00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1742 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3003 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.011 The recommended git tool is: git 00:00:00.011 using credential 00000000-0000-0000-0000-000000000002 00:00:00.014 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.029 Fetching changes from the remote Git repository 00:00:00.031 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.052 Using shallow fetch with depth 1 00:00:00.052 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.052 > git --version # timeout=10 00:00:00.097 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.098 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.098 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.096 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.112 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.127 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:02.127 > git config core.sparsecheckout # timeout=10 00:00:02.142 > git read-tree -mu HEAD # timeout=10 00:00:02.162 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:00:02.188 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:02.188 > git rev-list --no-walk aefc5e1436f0fee50fc4c8c4d1132172bdc97d4a # timeout=10 00:00:02.433 [Pipeline] Start of Pipeline 00:00:02.445 [Pipeline] library 00:00:02.446 Loading library shm_lib@master 00:00:02.446 Library shm_lib@master is cached. Copying from home. 00:00:02.463 [Pipeline] node 00:00:02.474 Running on FCP03 in /var/jenkins/workspace/dsa-phy-autotest 00:00:02.475 [Pipeline] { 00:00:02.487 [Pipeline] catchError 00:00:02.489 [Pipeline] { 00:00:02.503 [Pipeline] wrap 00:00:02.511 [Pipeline] { 00:00:02.519 [Pipeline] stage 00:00:02.520 [Pipeline] { (Prologue) 00:00:02.698 [Pipeline] sh 00:00:02.983 + logger -p user.info -t JENKINS-CI 00:00:03.001 [Pipeline] echo 00:00:03.005 Node: FCP03 00:00:03.012 [Pipeline] sh 00:00:03.313 [Pipeline] setCustomBuildProperty 00:00:03.327 [Pipeline] echo 00:00:03.328 Cleanup processes 00:00:03.334 [Pipeline] sh 00:00:03.617 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:03.617 2767642 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:03.630 [Pipeline] sh 00:00:03.916 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:03.916 ++ grep -v 'sudo pgrep' 00:00:03.916 ++ awk '{print $1}' 00:00:03.916 + sudo kill -9 00:00:03.916 + true 00:00:03.927 [Pipeline] cleanWs 00:00:03.936 [WS-CLEANUP] Deleting project workspace... 00:00:03.936 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.942 [WS-CLEANUP] done 00:00:03.946 [Pipeline] setCustomBuildProperty 00:00:03.959 [Pipeline] sh 00:00:04.244 + sudo git config --global --replace-all safe.directory '*' 00:00:04.299 [Pipeline] nodesByLabel 00:00:04.300 Could not find any nodes with 'sorcerer' label 00:00:04.303 [Pipeline] retry 00:00:04.305 [Pipeline] { 00:00:04.322 [Pipeline] checkout 00:00:04.328 The recommended git tool is: git 00:00:04.339 using credential 00000000-0000-0000-0000-000000000002 00:00:04.343 Cloning the remote Git repository 00:00:04.346 Honoring refspec on initial clone 00:00:04.352 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:04.352 > git init /var/jenkins/workspace/dsa-phy-autotest/jbp # timeout=10 00:00:04.359 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:04.359 > git --version # timeout=10 00:00:04.362 > git --version # 'git version 2.43.0' 00:00:04.362 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:04.363 Setting http proxy: proxy-dmz.intel.com:911 00:00:04.363 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:00:10.752 Avoid second fetch 00:00:10.771 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:10.875 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:10.884 [Pipeline] } 00:00:10.905 [Pipeline] // retry 00:00:10.917 [Pipeline] nodesByLabel 00:00:10.919 Could not find any nodes with 'sorcerer' label 00:00:10.925 [Pipeline] retry 00:00:10.927 [Pipeline] { 00:00:10.948 [Pipeline] checkout 00:00:10.956 The recommended git tool is: NONE 00:00:10.966 using credential 00000000-0000-0000-0000-000000000002 00:00:10.972 Cloning the remote Git repository 00:00:10.975 Honoring refspec on initial clone 00:00:10.739 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:10.745 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:00:10.758 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.767 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.776 > git config core.sparsecheckout # timeout=10 00:00:10.780 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:10.981 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:10.981 > git init /var/jenkins/workspace/dsa-phy-autotest/spdk # timeout=10 00:00:10.988 Using reference repository: /var/ci_repos/spdk_multi 00:00:10.988 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:10.988 > git --version # timeout=10 00:00:10.991 > git --version # 'git version 2.43.0' 00:00:10.991 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:10.992 Setting http proxy: proxy-dmz.intel.com:911 00:00:10.992 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/v24.01.x +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:44.630 Avoid second fetch 00:00:44.646 Checking out Revision 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 (FETCH_HEAD) 00:00:44.917 Commit message: "bdev/nvme: Fix the case that namespace was removed during reset" 00:00:44.614 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:44.619 > git config --add remote.origin.fetch refs/heads/v24.01.x # timeout=10 00:00:44.623 > git config --add remote.origin.fetch 
+refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:44.636 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:44.644 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:44.652 > git config core.sparsecheckout # timeout=10 00:00:44.656 > git checkout -f 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 # timeout=10 00:00:44.923 > git rev-list --no-walk 3f2c8979187809f9b3b0766ead4b91dc70fd73c6 # timeout=10 00:00:44.968 > git remote # timeout=10 00:00:44.972 > git submodule init # timeout=10 00:00:45.046 > git submodule sync # timeout=10 00:00:45.119 > git config --get remote.origin.url # timeout=10 00:00:45.129 > git submodule init # timeout=10 00:00:45.186 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:45.192 > git config --get submodule.dpdk.url # timeout=10 00:00:45.195 > git remote # timeout=10 00:00:45.199 > git config --get remote.origin.url # timeout=10 00:00:45.203 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:45.207 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:45.211 > git remote # timeout=10 00:00:45.216 > git config --get remote.origin.url # timeout=10 00:00:45.220 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:45.223 > git config --get submodule.isa-l.url # timeout=10 00:00:45.227 > git remote # timeout=10 00:00:45.231 > git config --get remote.origin.url # timeout=10 00:00:45.235 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:45.239 > git config --get submodule.ocf.url # timeout=10 00:00:45.242 > git remote # timeout=10 00:00:45.245 > git config --get remote.origin.url # timeout=10 00:00:45.248 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:45.252 > git config --get submodule.libvfio-user.url # timeout=10 00:00:45.255 > git remote # timeout=10 00:00:45.260 > git config --get remote.origin.url # timeout=10 00:00:45.264 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:45.268 > git config --get submodule.xnvme.url # timeout=10 00:00:45.271 > git remote # timeout=10 00:00:45.275 > git config --get remote.origin.url # timeout=10 00:00:45.278 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:45.282 > git config --get submodule.isa-l-crypto.url # timeout=10 00:00:45.286 > git remote # timeout=10 00:00:45.290 > git config --get remote.origin.url # timeout=10 00:00:45.293 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.299 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:45.300 > git submodule update 
--init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:45.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:45.300 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:57.097 [Pipeline] } 00:00:57.123 [Pipeline] // retry 00:00:57.133 [Pipeline] sh 00:00:57.422 + git -C spdk log --oneline -n5 00:00:57.423 36faa8c312b bdev/nvme: Fix the case that namespace was removed during reset 00:00:57.423 e2cb5a5eed9 bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:00:57.423 4b134b4abdb bdev/nvme: Delay callbacks when the next operation is a failover 00:00:57.423 d2ea4ecb14a llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:00:57.423 3b33f433344 test/nvme/cuse: Fix typo 00:00:57.436 [Pipeline] } 00:00:57.455 [Pipeline] // stage 00:00:57.465 [Pipeline] stage 00:00:57.467 [Pipeline] { (Prepare) 00:00:57.486 [Pipeline] writeFile 00:00:57.505 [Pipeline] sh 00:00:57.791 + logger -p user.info -t JENKINS-CI 00:00:57.806 [Pipeline] sh 00:00:58.092 + logger -p user.info -t JENKINS-CI 00:00:58.105 [Pipeline] sh 00:00:58.389 + cat autorun-spdk.conf 00:00:58.389 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.389 SPDK_TEST_ACCEL_DSA=1 00:00:58.389 SPDK_TEST_ACCEL_IAA=1 00:00:58.389 SPDK_TEST_NVMF=1 00:00:58.389 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.389 SPDK_RUN_ASAN=1 00:00:58.389 SPDK_RUN_UBSAN=1 00:00:58.397 RUN_NIGHTLY=1 00:00:58.402 [Pipeline] readFile 00:00:58.426 [Pipeline] withEnv 00:00:58.428 [Pipeline] { 00:00:58.442 [Pipeline] sh 00:00:58.730 + set -ex 00:00:58.730 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:00:58.730 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:00:58.730 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.730 ++ SPDK_TEST_ACCEL_DSA=1 00:00:58.730 ++ SPDK_TEST_ACCEL_IAA=1 00:00:58.730 ++ SPDK_TEST_NVMF=1 00:00:58.730 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.730 ++ SPDK_RUN_ASAN=1 00:00:58.730 ++ SPDK_RUN_UBSAN=1 00:00:58.730 ++ RUN_NIGHTLY=1 00:00:58.730 + case $SPDK_TEST_NVMF_NICS in 00:00:58.730 + DRIVERS= 00:00:58.730 + [[ -n '' ]] 00:00:58.730 + exit 0 00:00:58.741 [Pipeline] } 00:00:58.762 [Pipeline] // withEnv 00:00:58.769 [Pipeline] } 00:00:58.785 [Pipeline] // stage 00:00:58.798 [Pipeline] catchError 00:00:58.800 [Pipeline] { 00:00:58.815 [Pipeline] timeout 00:00:58.815 Timeout set to expire in 50 min 00:00:58.817 [Pipeline] { 00:00:58.837 [Pipeline] stage 00:00:58.839 [Pipeline] { (Tests) 00:00:58.855 [Pipeline] sh 00:00:59.141 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest 00:00:59.141 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest 00:00:59.141 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest 00:00:59.141 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]] 00:00:59.141 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:59.141 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output 00:00:59.141 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]] 00:00:59.141 + [[ ! 
-d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:59.141 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output 00:00:59.141 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:00:59.141 + cd /var/jenkins/workspace/dsa-phy-autotest 00:00:59.141 + source /etc/os-release 00:00:59.141 ++ NAME='Fedora Linux' 00:00:59.141 ++ VERSION='38 (Cloud Edition)' 00:00:59.141 ++ ID=fedora 00:00:59.141 ++ VERSION_ID=38 00:00:59.141 ++ VERSION_CODENAME= 00:00:59.141 ++ PLATFORM_ID=platform:f38 00:00:59.141 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:59.141 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.141 ++ LOGO=fedora-logo-icon 00:00:59.141 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:59.141 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.141 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:59.141 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.141 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.141 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.141 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:59.141 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.141 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:59.141 ++ SUPPORT_END=2024-05-14 00:00:59.141 ++ VARIANT='Cloud Edition' 00:00:59.141 ++ VARIANT_ID=cloud 00:00:59.141 + uname -a 00:00:59.141 Linux spdk-fcp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:59.141 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:01:01.059 Hugepages 00:01:01.059 node hugesize free / total 00:01:01.059 node0 1048576kB 0 / 0 00:01:01.059 node0 2048kB 0 / 0 00:01:01.059 node1 1048576kB 0 / 0 00:01:01.059 node1 2048kB 0 / 0 00:01:01.059 00:01:01.059 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.059 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:01:01.059 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:01:01.059 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:01:01.059 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:01:01.059 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:01:01.059 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:01:01.059 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:01:01.059 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:01:01.059 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:01:01.320 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:01:01.320 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:01:01.320 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:01:01.320 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:01:01.320 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:01:01.320 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:01:01.320 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:01:01.320 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:01:01.320 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:01:01.320 + rm -f /tmp/spdk-ld-path 00:01:01.320 + source autorun-spdk.conf 00:01:01.320 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.320 ++ SPDK_TEST_ACCEL_DSA=1 00:01:01.320 ++ SPDK_TEST_ACCEL_IAA=1 00:01:01.320 ++ SPDK_TEST_NVMF=1 00:01:01.320 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.320 ++ SPDK_RUN_ASAN=1 00:01:01.320 ++ SPDK_RUN_UBSAN=1 00:01:01.320 ++ RUN_NIGHTLY=1 00:01:01.320 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.320 + [[ -n '' ]] 00:01:01.320 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:01.320 + for M in /var/spdk/build-*-manifest.txt 00:01:01.320 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.320 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:01.320 + for M 
in /var/spdk/build-*-manifest.txt 00:01:01.320 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.320 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:01.320 ++ uname 00:01:01.320 + [[ Linux == \L\i\n\u\x ]] 00:01:01.320 + sudo dmesg -T 00:01:01.320 + sudo dmesg --clear 00:01:01.320 + dmesg_pid=2769411 00:01:01.320 + [[ Fedora Linux == FreeBSD ]] 00:01:01.320 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.320 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.320 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.320 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.320 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.320 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.320 + sudo dmesg -Tw 00:01:01.320 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.320 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:01.320 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.320 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.320 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.321 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.321 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.321 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.321 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:01.321 Test configuration: 00:01:01.321 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.321 SPDK_TEST_ACCEL_DSA=1 00:01:01.321 SPDK_TEST_ACCEL_IAA=1 00:01:01.321 SPDK_TEST_NVMF=1 00:01:01.321 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.321 SPDK_RUN_ASAN=1 00:01:01.321 SPDK_RUN_UBSAN=1 00:01:01.321 RUN_NIGHTLY=1 16:00:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:01.321 16:00:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.321 16:00:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.321 16:00:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.321 16:00:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.321 16:00:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.321 16:00:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.321 16:00:00 -- paths/export.sh@5 -- $ export PATH 00:01:01.321 16:00:00 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.321 16:00:00 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:01.321 16:00:00 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:01.321 16:00:00 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713880800.XXXXXX 00:01:01.321 16:00:00 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713880800.9EvDY0 00:01:01.321 16:00:00 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:01.321 16:00:00 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:01.321 16:00:00 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:01:01.321 16:00:00 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.321 16:00:00 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.321 16:00:00 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:01.321 16:00:00 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:01.321 16:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.321 16:00:00 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:01.321 16:00:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:01.321 16:00:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:01.321 16:00:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:01.321 16:00:00 -- spdk/autobuild.sh@16 -- $ date -u 00:01:01.321 Tue Apr 23 02:00:00 PM UTC 2024 00:01:01.321 16:00:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:01.582 LTS-24-g36faa8c312b 00:01:01.582 16:00:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:01.582 16:00:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:01.582 16:00:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:01.582 16:00:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:01.582 16:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.582 ************************************ 00:01:01.582 START TEST asan 00:01:01.582 ************************************ 00:01:01.582 16:00:00 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:01.582 using asan 00:01:01.582 00:01:01.582 real 0m0.000s 00:01:01.582 user 0m0.000s 00:01:01.582 sys 0m0.000s 00:01:01.582 16:00:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.582 16:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.582 ************************************ 00:01:01.582 END TEST asan 00:01:01.582 ************************************ 00:01:01.582 16:00:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:01.582 16:00:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 
'using ubsan' 00:01:01.582 16:00:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:01.582 16:00:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:01.582 16:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.582 ************************************ 00:01:01.582 START TEST ubsan 00:01:01.582 ************************************ 00:01:01.582 16:00:00 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:01.582 using ubsan 00:01:01.582 00:01:01.582 real 0m0.000s 00:01:01.582 user 0m0.000s 00:01:01.582 sys 0m0.000s 00:01:01.582 16:00:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.582 16:00:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.582 ************************************ 00:01:01.582 END TEST ubsan 00:01:01.582 ************************************ 00:01:01.582 16:00:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:01.582 16:00:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:01.582 16:00:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:01.582 16:00:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:01.582 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:01:01.583 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:01.844 Using 'verbs' RDMA provider 00:01:14.653 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:24.682 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:24.682 Creating mk/config.mk...done. 00:01:24.682 Creating mk/cc.flags.mk...done. 00:01:24.682 Type 'make' to build. 00:01:24.682 16:00:23 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:24.682 16:00:23 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:24.682 16:00:23 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:24.682 16:00:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.682 ************************************ 00:01:24.682 START TEST make 00:01:24.682 ************************************ 00:01:24.682 16:00:23 -- common/autotest_common.sh@1104 -- $ make -j128 00:01:24.682 make[1]: Nothing to be done for 'all'. 
00:01:29.953 The Meson build system 00:01:29.953 Version: 1.3.1 00:01:29.953 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:01:29.953 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:01:29.953 Build type: native build 00:01:29.953 Program cat found: YES (/usr/bin/cat) 00:01:29.953 Project name: DPDK 00:01:29.953 Project version: 23.11.0 00:01:29.953 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:29.953 C linker for the host machine: cc ld.bfd 2.39-16 00:01:29.953 Host machine cpu family: x86_64 00:01:29.953 Host machine cpu: x86_64 00:01:29.953 Message: ## Building in Developer Mode ## 00:01:29.953 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.953 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:29.953 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.953 Program python3 found: YES (/usr/bin/python3) 00:01:29.953 Program cat found: YES (/usr/bin/cat) 00:01:29.953 Compiler for C supports arguments -march=native: YES 00:01:29.953 Checking for size of "void *" : 8 00:01:29.953 Checking for size of "void *" : 8 (cached) 00:01:29.953 Library m found: YES 00:01:29.953 Library numa found: YES 00:01:29.953 Has header "numaif.h" : YES 00:01:29.953 Library fdt found: NO 00:01:29.953 Library execinfo found: NO 00:01:29.953 Has header "execinfo.h" : YES 00:01:29.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.953 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.953 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.953 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.953 Run-time dependency openssl found: YES 3.0.9 00:01:29.953 Run-time dependency libpcap found: YES 1.10.4 00:01:29.953 Has header "pcap.h" with dependency libpcap: YES 00:01:29.953 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.953 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.953 Compiler for C supports arguments -Wformat: YES 00:01:29.953 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.953 Compiler for C supports arguments -Wformat-security: NO 00:01:29.953 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.953 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.953 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.953 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.953 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.953 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.953 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.953 Compiler for C supports arguments -Wundef: YES 00:01:29.953 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.953 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.953 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.953 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.953 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.953 Program objdump found: YES (/usr/bin/objdump) 00:01:29.953 Compiler for C supports arguments -mavx512f: YES 00:01:29.953 Checking if "AVX512 checking" compiles: YES 00:01:29.953 Fetching value of define "__SSE4_2__" : 1 00:01:29.953 Fetching value of define "__AES__" : 1 
00:01:29.953 Fetching value of define "__AVX__" : 1 00:01:29.953 Fetching value of define "__AVX2__" : 1 00:01:29.953 Fetching value of define "__AVX512BW__" : 1 00:01:29.953 Fetching value of define "__AVX512CD__" : 1 00:01:29.953 Fetching value of define "__AVX512DQ__" : 1 00:01:29.953 Fetching value of define "__AVX512F__" : 1 00:01:29.953 Fetching value of define "__AVX512VL__" : 1 00:01:29.953 Fetching value of define "__PCLMUL__" : 1 00:01:29.953 Fetching value of define "__RDRND__" : 1 00:01:29.953 Fetching value of define "__RDSEED__" : 1 00:01:29.953 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:29.953 Fetching value of define "__znver1__" : (undefined) 00:01:29.953 Fetching value of define "__znver2__" : (undefined) 00:01:29.953 Fetching value of define "__znver3__" : (undefined) 00:01:29.953 Fetching value of define "__znver4__" : (undefined) 00:01:29.953 Library asan found: YES 00:01:29.953 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.953 Message: lib/log: Defining dependency "log" 00:01:29.953 Message: lib/kvargs: Defining dependency "kvargs" 00:01:29.953 Message: lib/telemetry: Defining dependency "telemetry" 00:01:29.953 Library rt found: YES 00:01:29.953 Checking for function "getentropy" : NO 00:01:29.953 Message: lib/eal: Defining dependency "eal" 00:01:29.953 Message: lib/ring: Defining dependency "ring" 00:01:29.953 Message: lib/rcu: Defining dependency "rcu" 00:01:29.953 Message: lib/mempool: Defining dependency "mempool" 00:01:29.953 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.953 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.953 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.953 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.953 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:29.953 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:29.953 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:29.953 Compiler for C supports arguments -mpclmul: YES 00:01:29.953 Compiler for C supports arguments -maes: YES 00:01:29.953 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.953 Compiler for C supports arguments -mavx512bw: YES 00:01:29.953 Compiler for C supports arguments -mavx512dq: YES 00:01:29.953 Compiler for C supports arguments -mavx512vl: YES 00:01:29.953 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.953 Compiler for C supports arguments -mavx2: YES 00:01:29.953 Compiler for C supports arguments -mavx: YES 00:01:29.953 Message: lib/net: Defining dependency "net" 00:01:29.953 Message: lib/meter: Defining dependency "meter" 00:01:29.953 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.953 Message: lib/pci: Defining dependency "pci" 00:01:29.953 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.953 Message: lib/hash: Defining dependency "hash" 00:01:29.953 Message: lib/timer: Defining dependency "timer" 00:01:29.953 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.953 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.953 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.953 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.953 Message: lib/power: Defining dependency "power" 00:01:29.953 Message: lib/reorder: Defining dependency "reorder" 00:01:29.953 Message: lib/security: Defining dependency "security" 00:01:29.953 Has header "linux/userfaultfd.h" : YES 00:01:29.953 Has header "linux/vduse.h" : YES 00:01:29.953 Message: lib/vhost: Defining dependency 
"vhost" 00:01:29.953 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.953 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.953 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.953 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.953 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:29.953 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:29.953 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:29.953 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:29.953 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:29.953 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:29.953 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.953 Configuring doxy-api-html.conf using configuration 00:01:29.953 Configuring doxy-api-man.conf using configuration 00:01:29.953 Program mandb found: YES (/usr/bin/mandb) 00:01:29.953 Program sphinx-build found: NO 00:01:29.953 Configuring rte_build_config.h using configuration 00:01:29.953 Message: 00:01:29.953 ================= 00:01:29.953 Applications Enabled 00:01:29.953 ================= 00:01:29.953 00:01:29.953 apps: 00:01:29.953 00:01:29.953 00:01:29.953 Message: 00:01:29.953 ================= 00:01:29.953 Libraries Enabled 00:01:29.953 ================= 00:01:29.953 00:01:29.953 libs: 00:01:29.953 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.953 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:29.953 cryptodev, dmadev, power, reorder, security, vhost, 00:01:29.953 00:01:29.953 Message: 00:01:29.953 =============== 00:01:29.953 Drivers Enabled 00:01:29.953 =============== 00:01:29.953 00:01:29.953 common: 00:01:29.953 00:01:29.953 bus: 00:01:29.953 pci, vdev, 00:01:29.954 mempool: 00:01:29.954 ring, 00:01:29.954 dma: 00:01:29.954 00:01:29.954 net: 00:01:29.954 00:01:29.954 crypto: 00:01:29.954 00:01:29.954 compress: 00:01:29.954 00:01:29.954 vdpa: 00:01:29.954 00:01:29.954 00:01:29.954 Message: 00:01:29.954 ================= 00:01:29.954 Content Skipped 00:01:29.954 ================= 00:01:29.954 00:01:29.954 apps: 00:01:29.954 dumpcap: explicitly disabled via build config 00:01:29.954 graph: explicitly disabled via build config 00:01:29.954 pdump: explicitly disabled via build config 00:01:29.954 proc-info: explicitly disabled via build config 00:01:29.954 test-acl: explicitly disabled via build config 00:01:29.954 test-bbdev: explicitly disabled via build config 00:01:29.954 test-cmdline: explicitly disabled via build config 00:01:29.954 test-compress-perf: explicitly disabled via build config 00:01:29.954 test-crypto-perf: explicitly disabled via build config 00:01:29.954 test-dma-perf: explicitly disabled via build config 00:01:29.954 test-eventdev: explicitly disabled via build config 00:01:29.954 test-fib: explicitly disabled via build config 00:01:29.954 test-flow-perf: explicitly disabled via build config 00:01:29.954 test-gpudev: explicitly disabled via build config 00:01:29.954 test-mldev: explicitly disabled via build config 00:01:29.954 test-pipeline: explicitly disabled via build config 00:01:29.954 test-pmd: explicitly disabled via build config 00:01:29.954 test-regex: explicitly disabled via build config 00:01:29.954 test-sad: explicitly disabled via build config 00:01:29.954 test-security-perf: explicitly disabled via build config 00:01:29.954 
00:01:29.954 libs: 00:01:29.954 metrics: explicitly disabled via build config 00:01:29.954 acl: explicitly disabled via build config 00:01:29.954 bbdev: explicitly disabled via build config 00:01:29.954 bitratestats: explicitly disabled via build config 00:01:29.954 bpf: explicitly disabled via build config 00:01:29.954 cfgfile: explicitly disabled via build config 00:01:29.954 distributor: explicitly disabled via build config 00:01:29.954 efd: explicitly disabled via build config 00:01:29.954 eventdev: explicitly disabled via build config 00:01:29.954 dispatcher: explicitly disabled via build config 00:01:29.954 gpudev: explicitly disabled via build config 00:01:29.954 gro: explicitly disabled via build config 00:01:29.954 gso: explicitly disabled via build config 00:01:29.954 ip_frag: explicitly disabled via build config 00:01:29.954 jobstats: explicitly disabled via build config 00:01:29.954 latencystats: explicitly disabled via build config 00:01:29.954 lpm: explicitly disabled via build config 00:01:29.954 member: explicitly disabled via build config 00:01:29.954 pcapng: explicitly disabled via build config 00:01:29.954 rawdev: explicitly disabled via build config 00:01:29.954 regexdev: explicitly disabled via build config 00:01:29.954 mldev: explicitly disabled via build config 00:01:29.954 rib: explicitly disabled via build config 00:01:29.954 sched: explicitly disabled via build config 00:01:29.954 stack: explicitly disabled via build config 00:01:29.954 ipsec: explicitly disabled via build config 00:01:29.954 pdcp: explicitly disabled via build config 00:01:29.954 fib: explicitly disabled via build config 00:01:29.954 port: explicitly disabled via build config 00:01:29.954 pdump: explicitly disabled via build config 00:01:29.954 table: explicitly disabled via build config 00:01:29.954 pipeline: explicitly disabled via build config 00:01:29.954 graph: explicitly disabled via build config 00:01:29.954 node: explicitly disabled via build config 00:01:29.954 00:01:29.954 drivers: 00:01:29.954 common/cpt: not in enabled drivers build config 00:01:29.954 common/dpaax: not in enabled drivers build config 00:01:29.954 common/iavf: not in enabled drivers build config 00:01:29.954 common/idpf: not in enabled drivers build config 00:01:29.954 common/mvep: not in enabled drivers build config 00:01:29.954 common/octeontx: not in enabled drivers build config 00:01:29.954 bus/auxiliary: not in enabled drivers build config 00:01:29.954 bus/cdx: not in enabled drivers build config 00:01:29.954 bus/dpaa: not in enabled drivers build config 00:01:29.954 bus/fslmc: not in enabled drivers build config 00:01:29.954 bus/ifpga: not in enabled drivers build config 00:01:29.954 bus/platform: not in enabled drivers build config 00:01:29.954 bus/vmbus: not in enabled drivers build config 00:01:29.954 common/cnxk: not in enabled drivers build config 00:01:29.954 common/mlx5: not in enabled drivers build config 00:01:29.954 common/nfp: not in enabled drivers build config 00:01:29.954 common/qat: not in enabled drivers build config 00:01:29.954 common/sfc_efx: not in enabled drivers build config 00:01:29.954 mempool/bucket: not in enabled drivers build config 00:01:29.954 mempool/cnxk: not in enabled drivers build config 00:01:29.954 mempool/dpaa: not in enabled drivers build config 00:01:29.954 mempool/dpaa2: not in enabled drivers build config 00:01:29.954 mempool/octeontx: not in enabled drivers build config 00:01:29.954 mempool/stack: not in enabled drivers build config 00:01:29.954 dma/cnxk: not in enabled 
drivers build config 00:01:29.954 dma/dpaa: not in enabled drivers build config 00:01:29.954 dma/dpaa2: not in enabled drivers build config 00:01:29.954 dma/hisilicon: not in enabled drivers build config 00:01:29.954 dma/idxd: not in enabled drivers build config 00:01:29.954 dma/ioat: not in enabled drivers build config 00:01:29.954 dma/skeleton: not in enabled drivers build config 00:01:29.954 net/af_packet: not in enabled drivers build config 00:01:29.954 net/af_xdp: not in enabled drivers build config 00:01:29.954 net/ark: not in enabled drivers build config 00:01:29.954 net/atlantic: not in enabled drivers build config 00:01:29.954 net/avp: not in enabled drivers build config 00:01:29.954 net/axgbe: not in enabled drivers build config 00:01:29.954 net/bnx2x: not in enabled drivers build config 00:01:29.954 net/bnxt: not in enabled drivers build config 00:01:29.954 net/bonding: not in enabled drivers build config 00:01:29.954 net/cnxk: not in enabled drivers build config 00:01:29.954 net/cpfl: not in enabled drivers build config 00:01:29.954 net/cxgbe: not in enabled drivers build config 00:01:29.954 net/dpaa: not in enabled drivers build config 00:01:29.954 net/dpaa2: not in enabled drivers build config 00:01:29.954 net/e1000: not in enabled drivers build config 00:01:29.954 net/ena: not in enabled drivers build config 00:01:29.954 net/enetc: not in enabled drivers build config 00:01:29.954 net/enetfec: not in enabled drivers build config 00:01:29.954 net/enic: not in enabled drivers build config 00:01:29.954 net/failsafe: not in enabled drivers build config 00:01:29.954 net/fm10k: not in enabled drivers build config 00:01:29.954 net/gve: not in enabled drivers build config 00:01:29.954 net/hinic: not in enabled drivers build config 00:01:29.954 net/hns3: not in enabled drivers build config 00:01:29.954 net/i40e: not in enabled drivers build config 00:01:29.954 net/iavf: not in enabled drivers build config 00:01:29.954 net/ice: not in enabled drivers build config 00:01:29.954 net/idpf: not in enabled drivers build config 00:01:29.954 net/igc: not in enabled drivers build config 00:01:29.954 net/ionic: not in enabled drivers build config 00:01:29.954 net/ipn3ke: not in enabled drivers build config 00:01:29.954 net/ixgbe: not in enabled drivers build config 00:01:29.954 net/mana: not in enabled drivers build config 00:01:29.954 net/memif: not in enabled drivers build config 00:01:29.954 net/mlx4: not in enabled drivers build config 00:01:29.954 net/mlx5: not in enabled drivers build config 00:01:29.954 net/mvneta: not in enabled drivers build config 00:01:29.954 net/mvpp2: not in enabled drivers build config 00:01:29.954 net/netvsc: not in enabled drivers build config 00:01:29.954 net/nfb: not in enabled drivers build config 00:01:29.954 net/nfp: not in enabled drivers build config 00:01:29.954 net/ngbe: not in enabled drivers build config 00:01:29.954 net/null: not in enabled drivers build config 00:01:29.954 net/octeontx: not in enabled drivers build config 00:01:29.954 net/octeon_ep: not in enabled drivers build config 00:01:29.954 net/pcap: not in enabled drivers build config 00:01:29.954 net/pfe: not in enabled drivers build config 00:01:29.954 net/qede: not in enabled drivers build config 00:01:29.954 net/ring: not in enabled drivers build config 00:01:29.954 net/sfc: not in enabled drivers build config 00:01:29.954 net/softnic: not in enabled drivers build config 00:01:29.954 net/tap: not in enabled drivers build config 00:01:29.954 net/thunderx: not in enabled drivers build 
config 00:01:29.954 net/txgbe: not in enabled drivers build config 00:01:29.954 net/vdev_netvsc: not in enabled drivers build config 00:01:29.954 net/vhost: not in enabled drivers build config 00:01:29.954 net/virtio: not in enabled drivers build config 00:01:29.954 net/vmxnet3: not in enabled drivers build config 00:01:29.954 raw/*: missing internal dependency, "rawdev" 00:01:29.954 crypto/armv8: not in enabled drivers build config 00:01:29.954 crypto/bcmfs: not in enabled drivers build config 00:01:29.954 crypto/caam_jr: not in enabled drivers build config 00:01:29.954 crypto/ccp: not in enabled drivers build config 00:01:29.954 crypto/cnxk: not in enabled drivers build config 00:01:29.954 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.954 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.954 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.954 crypto/mlx5: not in enabled drivers build config 00:01:29.954 crypto/mvsam: not in enabled drivers build config 00:01:29.954 crypto/nitrox: not in enabled drivers build config 00:01:29.954 crypto/null: not in enabled drivers build config 00:01:29.954 crypto/octeontx: not in enabled drivers build config 00:01:29.954 crypto/openssl: not in enabled drivers build config 00:01:29.954 crypto/scheduler: not in enabled drivers build config 00:01:29.954 crypto/uadk: not in enabled drivers build config 00:01:29.954 crypto/virtio: not in enabled drivers build config 00:01:29.954 compress/isal: not in enabled drivers build config 00:01:29.954 compress/mlx5: not in enabled drivers build config 00:01:29.954 compress/octeontx: not in enabled drivers build config 00:01:29.954 compress/zlib: not in enabled drivers build config 00:01:29.954 regex/*: missing internal dependency, "regexdev" 00:01:29.954 ml/*: missing internal dependency, "mldev" 00:01:29.954 vdpa/ifc: not in enabled drivers build config 00:01:29.954 vdpa/mlx5: not in enabled drivers build config 00:01:29.954 vdpa/nfp: not in enabled drivers build config 00:01:29.954 vdpa/sfc: not in enabled drivers build config 00:01:29.954 event/*: missing internal dependency, "eventdev" 00:01:29.954 baseband/*: missing internal dependency, "bbdev" 00:01:29.954 gpu/*: missing internal dependency, "gpudev" 00:01:29.954 00:01:29.955 00:01:29.955 Build targets in project: 84 00:01:29.955 00:01:29.955 DPDK 23.11.0 00:01:29.955 00:01:29.955 User defined options 00:01:29.955 buildtype : debug 00:01:29.955 default_library : shared 00:01:29.955 libdir : lib 00:01:29.955 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:29.955 b_sanitize : address 00:01:29.955 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:29.955 c_link_args : 00:01:29.955 cpu_instruction_set: native 00:01:29.955 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:29.955 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:29.955 enable_docs : false 00:01:29.955 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:29.955 enable_kmods : false 00:01:29.955 tests : false 00:01:29.955 00:01:29.955 Found ninja-1.11.1.git.kitware.jobserver-1 
at /usr/local/bin/ninja 00:01:29.955 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:01:30.218 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:30.218 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:30.218 [3/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:30.218 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:30.218 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:30.218 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:30.218 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.218 [8/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.218 [9/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:30.218 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:30.218 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:30.218 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:30.218 [13/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.218 [14/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:30.218 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:30.218 [16/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:30.218 [17/264] Linking static target lib/librte_kvargs.a 00:01:30.218 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.218 [19/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:30.218 [20/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.218 [21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:30.218 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:30.218 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:30.477 [24/264] Linking static target lib/librte_log.a 00:01:30.477 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:30.477 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:30.477 [27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.477 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.477 [29/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:30.477 [30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.477 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:30.477 [32/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:30.477 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:30.477 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:30.477 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:30.477 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:30.477 [37/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.477 [38/264] Linking static target lib/librte_pci.a 00:01:30.735 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.735 [40/264] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.735 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.735 [42/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:30.735 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.735 [44/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:30.735 [45/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.735 [46/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:30.735 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.735 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.735 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.735 [50/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:30.735 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.735 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.735 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.735 [54/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:30.735 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.735 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.735 [57/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:30.735 [58/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:30.735 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.735 [60/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:30.735 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.735 [62/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:30.735 [63/264] Linking static target lib/librte_telemetry.a 00:01:30.735 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:30.735 [65/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:30.735 [66/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:30.735 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.735 [68/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:30.735 [69/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:30.735 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:30.735 [71/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:30.735 [72/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:30.735 [73/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:30.735 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.735 [75/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:30.735 [76/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:30.735 [77/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:30.735 [78/264] Linking static target lib/librte_cmdline.a 00:01:30.735 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:30.735 [80/264] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:30.735 [81/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.735 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:30.735 [83/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:30.735 [84/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:30.735 [85/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:30.735 [86/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:30.735 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.735 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.735 [89/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:30.735 [90/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:30.735 [91/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:30.735 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.735 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:30.735 [94/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:30.735 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.735 [96/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:30.735 [97/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:30.993 [98/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:30.993 [99/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.993 [100/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:30.993 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.993 [102/264] Linking static target lib/librte_meter.a 00:01:30.993 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.993 [104/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.993 [105/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:30.993 [106/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:30.993 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.993 [108/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.993 [109/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:30.993 [110/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:30.993 [111/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:30.993 [112/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.993 [113/264] Linking static target lib/librte_ring.a 00:01:30.993 [114/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.993 [115/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:30.993 [116/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:30.994 [117/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:30.994 [118/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:30.994 [119/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:30.994 [120/264] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:30.994 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.994 [122/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:30.994 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:30.994 [124/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.994 [125/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:30.994 [126/264] Linking static target lib/librte_rcu.a 00:01:30.994 [127/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:30.994 [128/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:30.994 [129/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:30.994 [130/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.994 [131/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.994 [132/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:30.994 [133/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:30.994 [134/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:30.994 [135/264] Linking target lib/librte_log.so.24.0 00:01:30.994 [136/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:30.994 [137/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:30.994 [138/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:30.994 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:30.994 [140/264] Linking static target lib/librte_timer.a 00:01:30.994 [141/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.994 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:30.994 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:30.994 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:30.994 [145/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:30.994 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:30.994 [147/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.994 [148/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.994 [149/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.994 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:30.994 [151/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:30.994 [152/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:30.994 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:30.994 [154/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:30.994 [155/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:30.994 [156/264] Linking static target lib/librte_compressdev.a 00:01:30.994 [157/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:30.994 [158/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:30.994 [159/264] Linking static target lib/librte_eal.a 00:01:30.994 [160/264] Linking static target lib/librte_net.a 00:01:30.994 [161/264] Linking 
static target lib/librte_power.a 00:01:30.994 [162/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.994 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:30.994 [164/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:30.994 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:30.994 [166/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:30.994 [167/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:30.994 [168/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:30.994 [169/264] Linking target lib/librte_kvargs.so.24.0 00:01:30.994 [170/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:30.994 [171/264] Linking target lib/librte_telemetry.so.24.0 00:01:30.994 [172/264] Linking static target lib/librte_mempool.a 00:01:30.994 [173/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:30.994 [174/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:30.994 [175/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.994 [176/264] Linking static target lib/librte_dmadev.a 00:01:31.251 [177/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:31.251 [178/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:31.251 [179/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.251 [180/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.251 [181/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.251 [182/264] Linking static target drivers/librte_bus_vdev.a 00:01:31.251 [183/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.251 [184/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.251 [185/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:31.251 [186/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:31.251 [187/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.251 [188/264] Linking static target lib/librte_reorder.a 00:01:31.251 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.251 [190/264] Linking static target lib/librte_security.a 00:01:31.251 [191/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:31.252 [192/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.252 [193/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.252 [194/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.252 [195/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.252 [196/264] Linking static target drivers/librte_mempool_ring.a 00:01:31.252 [197/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.252 [198/264] Linking static target lib/librte_mbuf.a 00:01:31.252 [199/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:31.252 [200/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.252 
[201/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.252 [202/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.252 [203/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:31.252 [204/264] Linking static target drivers/librte_bus_pci.a 00:01:31.510 [205/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [206/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [207/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:31.510 [208/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.510 [209/264] Linking static target lib/librte_hash.a 00:01:31.510 [210/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [211/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [213/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.510 [214/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.767 [215/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:31.767 [216/264] Linking static target lib/librte_cryptodev.a 00:01:31.767 [217/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.767 [218/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:31.767 [219/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.024 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.589 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.589 [222/264] Linking static target lib/librte_ethdev.a 00:01:32.589 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:33.155 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.054 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:35.054 [226/264] Linking static target lib/librte_vhost.a 00:01:36.429 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.805 [228/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.805 [229/264] Linking target lib/librte_eal.so.24.0 00:01:37.805 [230/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:37.805 [231/264] Linking target lib/librte_pci.so.24.0 00:01:37.805 [232/264] Linking target lib/librte_ring.so.24.0 00:01:37.805 [233/264] Linking target lib/librte_meter.so.24.0 00:01:37.805 [234/264] Linking target lib/librte_timer.so.24.0 00:01:37.805 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:37.805 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:37.805 [237/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.805 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:37.805 [239/264] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:37.805 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:37.805 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:37.805 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:38.064 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:38.064 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:38.064 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:38.064 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:38.064 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:38.064 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:38.064 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:38.064 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:38.064 [251/264] Linking target lib/librte_net.so.24.0 00:01:38.064 [252/264] Linking target lib/librte_compressdev.so.24.0 00:01:38.064 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:38.064 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:38.324 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:38.324 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:38.324 [257/264] Linking target lib/librte_hash.so.24.0 00:01:38.324 [258/264] Linking target lib/librte_security.so.24.0 00:01:38.324 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:38.324 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:38.324 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:38.324 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:38.324 [263/264] Linking target lib/librte_power.so.24.0 00:01:38.582 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:38.582 INFO: autodetecting backend as ninja 00:01:38.582 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:39.147 CC lib/ut_mock/mock.o 00:01:39.147 CC lib/log/log_deprecated.o 00:01:39.147 CC lib/log/log.o 00:01:39.147 CC lib/log/log_flags.o 00:01:39.147 CC lib/ut/ut.o 00:01:39.147 LIB libspdk_ut_mock.a 00:01:39.147 SO libspdk_ut_mock.so.5.0 00:01:39.147 LIB libspdk_log.a 00:01:39.147 LIB libspdk_ut.a 00:01:39.147 SO libspdk_log.so.6.1 00:01:39.147 SYMLINK libspdk_ut_mock.so 00:01:39.147 SO libspdk_ut.so.1.0 00:01:39.406 SYMLINK libspdk_ut.so 00:01:39.406 SYMLINK libspdk_log.so 00:01:39.406 CC lib/dma/dma.o 00:01:39.406 CC lib/util/bit_array.o 00:01:39.406 CC lib/util/cpuset.o 00:01:39.406 CC lib/ioat/ioat.o 00:01:39.406 CC lib/util/crc16.o 00:01:39.406 CC lib/util/base64.o 00:01:39.406 CC lib/util/crc32.o 00:01:39.406 CC lib/util/crc32c.o 00:01:39.406 CC lib/util/fd.o 00:01:39.406 CC lib/util/crc32_ieee.o 00:01:39.406 CC lib/util/crc64.o 00:01:39.406 CC lib/util/dif.o 00:01:39.406 CC lib/util/file.o 00:01:39.406 CC lib/util/hexlify.o 00:01:39.406 CC lib/util/iov.o 00:01:39.406 CC lib/util/pipe.o 00:01:39.406 CC lib/util/math.o 00:01:39.406 CC lib/util/string.o 00:01:39.406 CC lib/util/strerror_tls.o 00:01:39.406 CC lib/util/uuid.o 00:01:39.406 CC lib/util/fd_group.o 00:01:39.406 CC lib/util/xor.o 00:01:39.406 CC lib/util/zipf.o 00:01:39.406 CXX lib/trace_parser/trace.o 
00:01:39.665 CC lib/vfio_user/host/vfio_user_pci.o 00:01:39.665 CC lib/vfio_user/host/vfio_user.o 00:01:39.665 LIB libspdk_dma.a 00:01:39.665 SO libspdk_dma.so.3.0 00:01:39.665 LIB libspdk_ioat.a 00:01:39.665 SO libspdk_ioat.so.6.0 00:01:39.665 SYMLINK libspdk_dma.so 00:01:39.665 LIB libspdk_vfio_user.a 00:01:39.665 SO libspdk_vfio_user.so.4.0 00:01:39.665 SYMLINK libspdk_ioat.so 00:01:39.924 SYMLINK libspdk_vfio_user.so 00:01:39.924 LIB libspdk_util.a 00:01:39.924 SO libspdk_util.so.8.0 00:01:40.182 LIB libspdk_trace_parser.a 00:01:40.182 SO libspdk_trace_parser.so.4.0 00:01:40.182 SYMLINK libspdk_util.so 00:01:40.182 SYMLINK libspdk_trace_parser.so 00:01:40.182 CC lib/vmd/vmd.o 00:01:40.182 CC lib/vmd/led.o 00:01:40.182 CC lib/rdma/common.o 00:01:40.182 CC lib/rdma/rdma_verbs.o 00:01:40.182 CC lib/conf/conf.o 00:01:40.182 CC lib/idxd/idxd_user.o 00:01:40.182 CC lib/idxd/idxd.o 00:01:40.182 CC lib/json/json_parse.o 00:01:40.182 CC lib/json/json_util.o 00:01:40.182 CC lib/env_dpdk/pci.o 00:01:40.182 CC lib/json/json_write.o 00:01:40.182 CC lib/env_dpdk/init.o 00:01:40.182 CC lib/env_dpdk/env.o 00:01:40.182 CC lib/env_dpdk/memory.o 00:01:40.182 CC lib/env_dpdk/threads.o 00:01:40.182 CC lib/env_dpdk/pci_ioat.o 00:01:40.182 CC lib/env_dpdk/pci_virtio.o 00:01:40.182 CC lib/env_dpdk/pci_idxd.o 00:01:40.182 CC lib/env_dpdk/pci_vmd.o 00:01:40.182 CC lib/env_dpdk/pci_event.o 00:01:40.182 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:40.182 CC lib/env_dpdk/pci_dpdk.o 00:01:40.182 CC lib/env_dpdk/sigbus_handler.o 00:01:40.182 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:40.440 LIB libspdk_rdma.a 00:01:40.440 LIB libspdk_conf.a 00:01:40.440 SO libspdk_rdma.so.5.0 00:01:40.440 LIB libspdk_json.a 00:01:40.440 SO libspdk_conf.so.5.0 00:01:40.440 SO libspdk_json.so.5.1 00:01:40.440 SYMLINK libspdk_conf.so 00:01:40.440 SYMLINK libspdk_rdma.so 00:01:40.698 SYMLINK libspdk_json.so 00:01:40.698 LIB libspdk_idxd.a 00:01:40.698 CC lib/jsonrpc/jsonrpc_server.o 00:01:40.698 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:40.698 CC lib/jsonrpc/jsonrpc_client.o 00:01:40.698 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:40.698 LIB libspdk_vmd.a 00:01:40.698 SO libspdk_idxd.so.11.0 00:01:40.698 SO libspdk_vmd.so.5.0 00:01:40.698 SYMLINK libspdk_idxd.so 00:01:40.698 SYMLINK libspdk_vmd.so 00:01:40.957 LIB libspdk_jsonrpc.a 00:01:40.957 SO libspdk_jsonrpc.so.5.1 00:01:40.957 SYMLINK libspdk_jsonrpc.so 00:01:41.214 CC lib/rpc/rpc.o 00:01:41.214 LIB libspdk_env_dpdk.a 00:01:41.214 SO libspdk_env_dpdk.so.13.0 00:01:41.214 LIB libspdk_rpc.a 00:01:41.473 SO libspdk_rpc.so.5.0 00:01:41.473 SYMLINK libspdk_rpc.so 00:01:41.473 SYMLINK libspdk_env_dpdk.so 00:01:41.473 CC lib/sock/sock.o 00:01:41.473 CC lib/sock/sock_rpc.o 00:01:41.473 CC lib/trace/trace.o 00:01:41.473 CC lib/trace/trace_flags.o 00:01:41.473 CC lib/trace/trace_rpc.o 00:01:41.473 CC lib/notify/notify.o 00:01:41.473 CC lib/notify/notify_rpc.o 00:01:41.740 LIB libspdk_notify.a 00:01:41.740 LIB libspdk_trace.a 00:01:41.740 SO libspdk_notify.so.5.0 00:01:41.740 SO libspdk_trace.so.9.0 00:01:41.740 SYMLINK libspdk_notify.so 00:01:41.740 SYMLINK libspdk_trace.so 00:01:41.740 LIB libspdk_sock.a 00:01:41.740 SO libspdk_sock.so.8.0 00:01:41.999 SYMLINK libspdk_sock.so 00:01:41.999 CC lib/thread/iobuf.o 00:01:41.999 CC lib/thread/thread.o 00:01:41.999 CC lib/nvme/nvme_ctrlr.o 00:01:41.999 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:41.999 CC lib/nvme/nvme_ns_cmd.o 00:01:41.999 CC lib/nvme/nvme_fabric.o 00:01:41.999 CC lib/nvme/nvme_ns.o 00:01:41.999 CC lib/nvme/nvme_pcie.o 00:01:41.999 CC 
lib/nvme/nvme_pcie_common.o 00:01:41.999 CC lib/nvme/nvme_qpair.o 00:01:41.999 CC lib/nvme/nvme_transport.o 00:01:41.999 CC lib/nvme/nvme_quirks.o 00:01:41.999 CC lib/nvme/nvme.o 00:01:41.999 CC lib/nvme/nvme_discovery.o 00:01:41.999 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:41.999 CC lib/nvme/nvme_tcp.o 00:01:41.999 CC lib/nvme/nvme_opal.o 00:01:41.999 CC lib/nvme/nvme_io_msg.o 00:01:41.999 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:41.999 CC lib/nvme/nvme_zns.o 00:01:41.999 CC lib/nvme/nvme_poll_group.o 00:01:41.999 CC lib/nvme/nvme_rdma.o 00:01:41.999 CC lib/nvme/nvme_cuse.o 00:01:41.999 CC lib/nvme/nvme_vfio_user.o 00:01:42.934 LIB libspdk_thread.a 00:01:42.934 SO libspdk_thread.so.9.0 00:01:42.934 SYMLINK libspdk_thread.so 00:01:43.191 CC lib/init/subsystem.o 00:01:43.191 CC lib/init/json_config.o 00:01:43.191 CC lib/init/subsystem_rpc.o 00:01:43.191 CC lib/init/rpc.o 00:01:43.191 CC lib/accel/accel_rpc.o 00:01:43.191 CC lib/accel/accel.o 00:01:43.191 CC lib/accel/accel_sw.o 00:01:43.191 CC lib/blob/request.o 00:01:43.191 CC lib/blob/blobstore.o 00:01:43.191 CC lib/blob/zeroes.o 00:01:43.191 CC lib/blob/blob_bs_dev.o 00:01:43.191 CC lib/virtio/virtio.o 00:01:43.191 CC lib/virtio/virtio_vfio_user.o 00:01:43.191 CC lib/virtio/virtio_vhost_user.o 00:01:43.191 CC lib/virtio/virtio_pci.o 00:01:43.450 LIB libspdk_init.a 00:01:43.450 SO libspdk_init.so.4.0 00:01:43.450 SYMLINK libspdk_init.so 00:01:43.450 LIB libspdk_virtio.a 00:01:43.450 SO libspdk_virtio.so.6.0 00:01:43.709 SYMLINK libspdk_virtio.so 00:01:43.709 CC lib/event/app.o 00:01:43.709 CC lib/event/reactor.o 00:01:43.709 CC lib/event/log_rpc.o 00:01:43.709 CC lib/event/scheduler_static.o 00:01:43.709 CC lib/event/app_rpc.o 00:01:43.966 LIB libspdk_event.a 00:01:43.966 SO libspdk_event.so.12.0 00:01:43.966 SYMLINK libspdk_event.so 00:01:44.224 LIB libspdk_nvme.a 00:01:44.224 SO libspdk_nvme.so.12.0 00:01:44.224 LIB libspdk_accel.a 00:01:44.482 SO libspdk_accel.so.14.0 00:01:44.482 SYMLINK libspdk_accel.so 00:01:44.482 SYMLINK libspdk_nvme.so 00:01:44.482 CC lib/bdev/bdev_zone.o 00:01:44.482 CC lib/bdev/bdev.o 00:01:44.482 CC lib/bdev/bdev_rpc.o 00:01:44.482 CC lib/bdev/scsi_nvme.o 00:01:44.482 CC lib/bdev/part.o 00:01:45.859 LIB libspdk_blob.a 00:01:45.859 SO libspdk_blob.so.10.1 00:01:45.859 SYMLINK libspdk_blob.so 00:01:45.859 CC lib/blobfs/blobfs.o 00:01:45.859 CC lib/blobfs/tree.o 00:01:45.859 CC lib/lvol/lvol.o 00:01:46.425 LIB libspdk_bdev.a 00:01:46.425 SO libspdk_bdev.so.14.0 00:01:46.425 LIB libspdk_blobfs.a 00:01:46.425 SYMLINK libspdk_bdev.so 00:01:46.683 LIB libspdk_lvol.a 00:01:46.683 SO libspdk_blobfs.so.9.0 00:01:46.683 SO libspdk_lvol.so.9.1 00:01:46.683 SYMLINK libspdk_blobfs.so 00:01:46.683 SYMLINK libspdk_lvol.so 00:01:46.683 CC lib/nbd/nbd.o 00:01:46.683 CC lib/nbd/nbd_rpc.o 00:01:46.683 CC lib/scsi/dev.o 00:01:46.683 CC lib/nvmf/ctrlr.o 00:01:46.683 CC lib/nvmf/ctrlr_discovery.o 00:01:46.683 CC lib/scsi/lun.o 00:01:46.683 CC lib/nvmf/ctrlr_bdev.o 00:01:46.683 CC lib/nvmf/subsystem.o 00:01:46.683 CC lib/nvmf/nvmf_rpc.o 00:01:46.683 CC lib/nvmf/transport.o 00:01:46.683 CC lib/nvmf/nvmf.o 00:01:46.683 CC lib/scsi/scsi.o 00:01:46.683 CC lib/scsi/port.o 00:01:46.683 CC lib/nvmf/tcp.o 00:01:46.683 CC lib/nvmf/rdma.o 00:01:46.683 CC lib/scsi/scsi_pr.o 00:01:46.683 CC lib/ftl/ftl_core.o 00:01:46.683 CC lib/scsi/scsi_bdev.o 00:01:46.683 CC lib/ftl/ftl_layout.o 00:01:46.683 CC lib/ftl/ftl_init.o 00:01:46.683 CC lib/scsi/scsi_rpc.o 00:01:46.683 CC lib/scsi/task.o 00:01:46.683 CC lib/ftl/ftl_io.o 00:01:46.683 CC lib/ftl/ftl_sb.o 
00:01:46.683 CC lib/ftl/ftl_l2p.o 00:01:46.683 CC lib/ftl/ftl_debug.o 00:01:46.683 CC lib/ftl/ftl_l2p_flat.o 00:01:46.683 CC lib/ftl/ftl_nv_cache.o 00:01:46.683 CC lib/ftl/ftl_band.o 00:01:46.683 CC lib/ftl/ftl_band_ops.o 00:01:46.683 CC lib/ftl/ftl_writer.o 00:01:46.683 CC lib/ftl/ftl_rq.o 00:01:46.683 CC lib/ftl/ftl_reloc.o 00:01:46.683 CC lib/ftl/ftl_l2p_cache.o 00:01:46.683 CC lib/ftl/ftl_p2l.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:46.683 CC lib/ublk/ublk.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:46.683 CC lib/ublk/ublk_rpc.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:46.683 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:46.683 CC lib/ftl/utils/ftl_conf.o 00:01:46.683 CC lib/ftl/utils/ftl_md.o 00:01:46.683 CC lib/ftl/utils/ftl_bitmap.o 00:01:46.683 CC lib/ftl/utils/ftl_mempool.o 00:01:46.683 CC lib/ftl/utils/ftl_property.o 00:01:46.683 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:46.683 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:46.683 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:46.683 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:46.683 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:46.683 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:46.683 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:46.683 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:46.683 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:46.683 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:46.683 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.683 CC lib/ftl/base/ftl_base_dev.o 00:01:46.683 CC lib/ftl/ftl_trace.o 00:01:47.263 LIB libspdk_nbd.a 00:01:47.263 SO libspdk_nbd.so.6.0 00:01:47.263 SYMLINK libspdk_nbd.so 00:01:47.263 LIB libspdk_scsi.a 00:01:47.615 SO libspdk_scsi.so.8.0 00:01:47.615 SYMLINK libspdk_scsi.so 00:01:47.615 LIB libspdk_ublk.a 00:01:47.615 SO libspdk_ublk.so.2.0 00:01:47.615 LIB libspdk_ftl.a 00:01:47.615 SYMLINK libspdk_ublk.so 00:01:47.615 CC lib/vhost/vhost_rpc.o 00:01:47.615 CC lib/vhost/vhost.o 00:01:47.615 CC lib/vhost/rte_vhost_user.o 00:01:47.615 CC lib/vhost/vhost_scsi.o 00:01:47.615 CC lib/vhost/vhost_blk.o 00:01:47.615 CC lib/iscsi/conn.o 00:01:47.615 CC lib/iscsi/init_grp.o 00:01:47.615 CC lib/iscsi/iscsi.o 00:01:47.615 CC lib/iscsi/md5.o 00:01:47.615 CC lib/iscsi/portal_grp.o 00:01:47.615 CC lib/iscsi/param.o 00:01:47.615 CC lib/iscsi/tgt_node.o 00:01:47.615 CC lib/iscsi/iscsi_subsystem.o 00:01:47.615 CC lib/iscsi/iscsi_rpc.o 00:01:47.615 CC lib/iscsi/task.o 00:01:47.615 SO libspdk_ftl.so.8.0 00:01:47.927 SYMLINK libspdk_ftl.so 00:01:48.868 LIB libspdk_vhost.a 00:01:48.868 SO libspdk_vhost.so.7.1 00:01:48.868 LIB libspdk_nvmf.a 00:01:48.868 SO libspdk_nvmf.so.17.0 00:01:48.868 SYMLINK libspdk_vhost.so 00:01:49.128 LIB libspdk_iscsi.a 00:01:49.128 SO libspdk_iscsi.so.7.0 00:01:49.128 SYMLINK libspdk_nvmf.so 00:01:49.391 SYMLINK libspdk_iscsi.so 00:01:49.652 CC module/env_dpdk/env_dpdk_rpc.o 00:01:49.652 CC module/blob/bdev/blob_bdev.o 00:01:49.652 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:49.652 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:49.652 CC module/scheduler/gscheduler/gscheduler.o 00:01:49.652 CC module/accel/iaa/accel_iaa.o 00:01:49.652 CC module/accel/iaa/accel_iaa_rpc.o 
00:01:49.652 CC module/accel/error/accel_error.o 00:01:49.652 CC module/sock/posix/posix.o 00:01:49.652 CC module/accel/error/accel_error_rpc.o 00:01:49.652 CC module/accel/ioat/accel_ioat.o 00:01:49.652 CC module/accel/ioat/accel_ioat_rpc.o 00:01:49.652 CC module/accel/dsa/accel_dsa_rpc.o 00:01:49.652 CC module/accel/dsa/accel_dsa.o 00:01:49.652 LIB libspdk_env_dpdk_rpc.a 00:01:49.652 LIB libspdk_scheduler_gscheduler.a 00:01:49.652 SO libspdk_env_dpdk_rpc.so.5.0 00:01:49.652 LIB libspdk_scheduler_dynamic.a 00:01:49.652 SO libspdk_scheduler_gscheduler.so.3.0 00:01:49.652 LIB libspdk_scheduler_dpdk_governor.a 00:01:49.652 SO libspdk_scheduler_dynamic.so.3.0 00:01:49.652 LIB libspdk_accel_ioat.a 00:01:49.652 LIB libspdk_accel_error.a 00:01:49.652 SYMLINK libspdk_env_dpdk_rpc.so 00:01:49.652 LIB libspdk_accel_iaa.a 00:01:49.652 SYMLINK libspdk_scheduler_gscheduler.so 00:01:49.910 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:49.910 SO libspdk_accel_error.so.1.0 00:01:49.910 SO libspdk_accel_ioat.so.5.0 00:01:49.910 SO libspdk_accel_iaa.so.2.0 00:01:49.910 LIB libspdk_blob_bdev.a 00:01:49.910 LIB libspdk_accel_dsa.a 00:01:49.910 SO libspdk_blob_bdev.so.10.1 00:01:49.910 SYMLINK libspdk_scheduler_dynamic.so 00:01:49.910 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:49.910 SYMLINK libspdk_accel_error.so 00:01:49.910 SYMLINK libspdk_accel_iaa.so 00:01:49.910 SO libspdk_accel_dsa.so.4.0 00:01:49.910 SYMLINK libspdk_accel_ioat.so 00:01:49.910 SYMLINK libspdk_blob_bdev.so 00:01:49.910 SYMLINK libspdk_accel_dsa.so 00:01:50.167 CC module/bdev/null/bdev_null.o 00:01:50.168 CC module/bdev/null/bdev_null_rpc.o 00:01:50.168 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:50.168 CC module/bdev/lvol/vbdev_lvol.o 00:01:50.168 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:50.168 CC module/bdev/delay/vbdev_delay.o 00:01:50.168 CC module/blobfs/bdev/blobfs_bdev.o 00:01:50.168 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:50.168 CC module/bdev/gpt/gpt.o 00:01:50.168 CC module/bdev/passthru/vbdev_passthru.o 00:01:50.168 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:50.168 CC module/bdev/gpt/vbdev_gpt.o 00:01:50.168 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:50.168 CC module/bdev/aio/bdev_aio_rpc.o 00:01:50.168 CC module/bdev/nvme/bdev_nvme.o 00:01:50.168 CC module/bdev/nvme/nvme_rpc.o 00:01:50.168 CC module/bdev/nvme/bdev_mdns_client.o 00:01:50.168 CC module/bdev/aio/bdev_aio.o 00:01:50.168 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:50.168 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:50.168 CC module/bdev/nvme/vbdev_opal.o 00:01:50.168 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:50.168 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:50.168 CC module/bdev/error/vbdev_error.o 00:01:50.168 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:50.168 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:50.168 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:50.168 CC module/bdev/raid/bdev_raid.o 00:01:50.168 CC module/bdev/error/vbdev_error_rpc.o 00:01:50.168 CC module/bdev/raid/bdev_raid_rpc.o 00:01:50.168 CC module/bdev/malloc/bdev_malloc.o 00:01:50.168 CC module/bdev/raid/raid0.o 00:01:50.168 CC module/bdev/raid/bdev_raid_sb.o 00:01:50.168 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:50.168 CC module/bdev/raid/concat.o 00:01:50.168 CC module/bdev/raid/raid1.o 00:01:50.168 CC module/bdev/split/vbdev_split.o 00:01:50.168 CC module/bdev/split/vbdev_split_rpc.o 00:01:50.168 CC module/bdev/iscsi/bdev_iscsi.o 00:01:50.168 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:50.168 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:01:50.168 CC module/bdev/ftl/bdev_ftl.o 00:01:50.426 LIB libspdk_blobfs_bdev.a 00:01:50.426 SO libspdk_blobfs_bdev.so.5.0 00:01:50.426 LIB libspdk_bdev_gpt.a 00:01:50.426 LIB libspdk_bdev_null.a 00:01:50.426 LIB libspdk_bdev_split.a 00:01:50.426 SO libspdk_bdev_gpt.so.5.0 00:01:50.426 SYMLINK libspdk_blobfs_bdev.so 00:01:50.426 SO libspdk_bdev_null.so.5.0 00:01:50.426 LIB libspdk_bdev_ftl.a 00:01:50.426 LIB libspdk_bdev_zone_block.a 00:01:50.426 SO libspdk_bdev_split.so.5.0 00:01:50.426 LIB libspdk_sock_posix.a 00:01:50.426 LIB libspdk_bdev_error.a 00:01:50.426 SYMLINK libspdk_bdev_gpt.so 00:01:50.426 SO libspdk_bdev_ftl.so.5.0 00:01:50.426 SYMLINK libspdk_bdev_null.so 00:01:50.426 SO libspdk_bdev_zone_block.so.5.0 00:01:50.426 SO libspdk_sock_posix.so.5.0 00:01:50.426 SO libspdk_bdev_error.so.5.0 00:01:50.426 SYMLINK libspdk_bdev_split.so 00:01:50.426 LIB libspdk_bdev_malloc.a 00:01:50.426 LIB libspdk_bdev_iscsi.a 00:01:50.426 LIB libspdk_bdev_passthru.a 00:01:50.426 SYMLINK libspdk_bdev_ftl.so 00:01:50.426 SO libspdk_bdev_malloc.so.5.0 00:01:50.426 SYMLINK libspdk_bdev_error.so 00:01:50.426 SO libspdk_bdev_iscsi.so.5.0 00:01:50.426 SYMLINK libspdk_bdev_zone_block.so 00:01:50.426 LIB libspdk_bdev_delay.a 00:01:50.426 SO libspdk_bdev_passthru.so.5.0 00:01:50.426 LIB libspdk_bdev_aio.a 00:01:50.426 SYMLINK libspdk_sock_posix.so 00:01:50.426 SO libspdk_bdev_delay.so.5.0 00:01:50.426 SYMLINK libspdk_bdev_malloc.so 00:01:50.426 SO libspdk_bdev_aio.so.5.0 00:01:50.684 SYMLINK libspdk_bdev_iscsi.so 00:01:50.684 SYMLINK libspdk_bdev_passthru.so 00:01:50.684 SYMLINK libspdk_bdev_delay.so 00:01:50.684 LIB libspdk_bdev_virtio.a 00:01:50.684 SYMLINK libspdk_bdev_aio.so 00:01:50.684 SO libspdk_bdev_virtio.so.5.0 00:01:50.684 LIB libspdk_bdev_lvol.a 00:01:50.684 SYMLINK libspdk_bdev_virtio.so 00:01:50.684 SO libspdk_bdev_lvol.so.5.0 00:01:50.684 SYMLINK libspdk_bdev_lvol.so 00:01:51.252 LIB libspdk_bdev_raid.a 00:01:51.252 SO libspdk_bdev_raid.so.5.0 00:01:51.252 SYMLINK libspdk_bdev_raid.so 00:01:51.819 LIB libspdk_bdev_nvme.a 00:01:51.819 SO libspdk_bdev_nvme.so.6.0 00:01:51.819 SYMLINK libspdk_bdev_nvme.so 00:01:52.077 CC module/event/subsystems/vmd/vmd.o 00:01:52.077 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:52.077 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:52.077 CC module/event/subsystems/scheduler/scheduler.o 00:01:52.334 CC module/event/subsystems/sock/sock.o 00:01:52.334 CC module/event/subsystems/iobuf/iobuf.o 00:01:52.334 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:52.334 LIB libspdk_event_vhost_blk.a 00:01:52.334 SO libspdk_event_vhost_blk.so.2.0 00:01:52.334 LIB libspdk_event_vmd.a 00:01:52.334 LIB libspdk_event_sock.a 00:01:52.334 LIB libspdk_event_scheduler.a 00:01:52.334 LIB libspdk_event_iobuf.a 00:01:52.334 SO libspdk_event_sock.so.4.0 00:01:52.334 SO libspdk_event_vmd.so.5.0 00:01:52.334 SYMLINK libspdk_event_vhost_blk.so 00:01:52.334 SO libspdk_event_scheduler.so.3.0 00:01:52.334 SO libspdk_event_iobuf.so.2.0 00:01:52.334 SYMLINK libspdk_event_vmd.so 00:01:52.334 SYMLINK libspdk_event_scheduler.so 00:01:52.334 SYMLINK libspdk_event_sock.so 00:01:52.334 SYMLINK libspdk_event_iobuf.so 00:01:52.591 CC module/event/subsystems/accel/accel.o 00:01:52.591 LIB libspdk_event_accel.a 00:01:52.850 SO libspdk_event_accel.so.5.0 00:01:52.851 SYMLINK libspdk_event_accel.so 00:01:52.851 CC module/event/subsystems/bdev/bdev.o 00:01:53.110 LIB libspdk_event_bdev.a 00:01:53.110 SO libspdk_event_bdev.so.5.0 00:01:53.110 SYMLINK libspdk_event_bdev.so 00:01:53.110 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:01:53.110 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:53.110 CC module/event/subsystems/scsi/scsi.o 00:01:53.110 CC module/event/subsystems/nbd/nbd.o 00:01:53.110 CC module/event/subsystems/ublk/ublk.o 00:01:53.369 LIB libspdk_event_scsi.a 00:01:53.369 LIB libspdk_event_ublk.a 00:01:53.369 SO libspdk_event_scsi.so.5.0 00:01:53.369 LIB libspdk_event_nbd.a 00:01:53.369 SO libspdk_event_ublk.so.2.0 00:01:53.369 LIB libspdk_event_nvmf.a 00:01:53.369 SYMLINK libspdk_event_scsi.so 00:01:53.369 SO libspdk_event_nbd.so.5.0 00:01:53.369 SO libspdk_event_nvmf.so.5.0 00:01:53.369 SYMLINK libspdk_event_ublk.so 00:01:53.369 SYMLINK libspdk_event_nbd.so 00:01:53.369 SYMLINK libspdk_event_nvmf.so 00:01:53.627 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:53.627 CC module/event/subsystems/iscsi/iscsi.o 00:01:53.627 LIB libspdk_event_vhost_scsi.a 00:01:53.627 LIB libspdk_event_iscsi.a 00:01:53.627 SO libspdk_event_vhost_scsi.so.2.0 00:01:53.627 SO libspdk_event_iscsi.so.5.0 00:01:53.627 SYMLINK libspdk_event_vhost_scsi.so 00:01:53.627 SYMLINK libspdk_event_iscsi.so 00:01:53.886 SO libspdk.so.5.0 00:01:53.886 SYMLINK libspdk.so 00:01:53.886 CXX app/trace/trace.o 00:01:53.886 CC app/spdk_nvme_identify/identify.o 00:01:53.886 CC app/spdk_top/spdk_top.o 00:01:53.886 CC app/spdk_nvme_perf/perf.o 00:01:53.886 CC app/spdk_nvme_discover/discovery_aer.o 00:01:53.886 CC app/spdk_lspci/spdk_lspci.o 00:01:53.886 CC app/trace_record/trace_record.o 00:01:53.886 TEST_HEADER include/spdk/accel.h 00:01:53.886 TEST_HEADER include/spdk/accel_module.h 00:01:53.886 TEST_HEADER include/spdk/assert.h 00:01:54.161 TEST_HEADER include/spdk/barrier.h 00:01:54.161 CC test/rpc_client/rpc_client_test.o 00:01:54.161 TEST_HEADER include/spdk/bdev.h 00:01:54.161 TEST_HEADER include/spdk/base64.h 00:01:54.161 TEST_HEADER include/spdk/bdev_module.h 00:01:54.161 TEST_HEADER include/spdk/bdev_zone.h 00:01:54.161 TEST_HEADER include/spdk/bit_array.h 00:01:54.161 TEST_HEADER include/spdk/bit_pool.h 00:01:54.161 CC app/nvmf_tgt/nvmf_main.o 00:01:54.161 TEST_HEADER include/spdk/blob_bdev.h 00:01:54.161 CC app/spdk_dd/spdk_dd.o 00:01:54.161 CC app/iscsi_tgt/iscsi_tgt.o 00:01:54.161 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:54.161 TEST_HEADER include/spdk/blobfs.h 00:01:54.161 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:54.161 TEST_HEADER include/spdk/conf.h 00:01:54.161 CC app/vhost/vhost.o 00:01:54.161 TEST_HEADER include/spdk/config.h 00:01:54.161 TEST_HEADER include/spdk/cpuset.h 00:01:54.161 TEST_HEADER include/spdk/blob.h 00:01:54.161 TEST_HEADER include/spdk/crc16.h 00:01:54.161 TEST_HEADER include/spdk/crc64.h 00:01:54.161 TEST_HEADER include/spdk/dma.h 00:01:54.161 TEST_HEADER include/spdk/crc32.h 00:01:54.161 TEST_HEADER include/spdk/dif.h 00:01:54.161 TEST_HEADER include/spdk/env_dpdk.h 00:01:54.161 TEST_HEADER include/spdk/endian.h 00:01:54.161 TEST_HEADER include/spdk/env.h 00:01:54.161 TEST_HEADER include/spdk/fd_group.h 00:01:54.161 TEST_HEADER include/spdk/event.h 00:01:54.161 TEST_HEADER include/spdk/fd.h 00:01:54.161 TEST_HEADER include/spdk/file.h 00:01:54.161 TEST_HEADER include/spdk/ftl.h 00:01:54.161 TEST_HEADER include/spdk/hexlify.h 00:01:54.161 TEST_HEADER include/spdk/gpt_spec.h 00:01:54.161 TEST_HEADER include/spdk/histogram_data.h 00:01:54.161 TEST_HEADER include/spdk/idxd.h 00:01:54.161 TEST_HEADER include/spdk/idxd_spec.h 00:01:54.161 TEST_HEADER include/spdk/init.h 00:01:54.161 TEST_HEADER include/spdk/ioat_spec.h 00:01:54.161 TEST_HEADER 
include/spdk/iscsi_spec.h 00:01:54.161 TEST_HEADER include/spdk/ioat.h 00:01:54.161 TEST_HEADER include/spdk/json.h 00:01:54.161 TEST_HEADER include/spdk/jsonrpc.h 00:01:54.162 TEST_HEADER include/spdk/likely.h 00:01:54.162 TEST_HEADER include/spdk/log.h 00:01:54.162 TEST_HEADER include/spdk/lvol.h 00:01:54.162 TEST_HEADER include/spdk/memory.h 00:01:54.162 TEST_HEADER include/spdk/mmio.h 00:01:54.162 TEST_HEADER include/spdk/nbd.h 00:01:54.162 TEST_HEADER include/spdk/notify.h 00:01:54.162 TEST_HEADER include/spdk/nvme.h 00:01:54.162 CC app/spdk_tgt/spdk_tgt.o 00:01:54.162 TEST_HEADER include/spdk/nvme_intel.h 00:01:54.162 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:54.162 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:54.162 TEST_HEADER include/spdk/nvme_spec.h 00:01:54.162 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:54.162 TEST_HEADER include/spdk/nvme_zns.h 00:01:54.162 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:54.162 TEST_HEADER include/spdk/nvmf_spec.h 00:01:54.162 TEST_HEADER include/spdk/nvmf.h 00:01:54.162 TEST_HEADER include/spdk/opal.h 00:01:54.162 TEST_HEADER include/spdk/nvmf_transport.h 00:01:54.162 TEST_HEADER include/spdk/pci_ids.h 00:01:54.162 TEST_HEADER include/spdk/opal_spec.h 00:01:54.162 TEST_HEADER include/spdk/pipe.h 00:01:54.162 TEST_HEADER include/spdk/queue.h 00:01:54.162 TEST_HEADER include/spdk/reduce.h 00:01:54.162 TEST_HEADER include/spdk/scheduler.h 00:01:54.162 TEST_HEADER include/spdk/rpc.h 00:01:54.162 TEST_HEADER include/spdk/scsi.h 00:01:54.162 TEST_HEADER include/spdk/scsi_spec.h 00:01:54.162 TEST_HEADER include/spdk/sock.h 00:01:54.162 TEST_HEADER include/spdk/stdinc.h 00:01:54.162 TEST_HEADER include/spdk/thread.h 00:01:54.162 TEST_HEADER include/spdk/string.h 00:01:54.162 TEST_HEADER include/spdk/trace.h 00:01:54.162 TEST_HEADER include/spdk/trace_parser.h 00:01:54.162 TEST_HEADER include/spdk/ublk.h 00:01:54.162 TEST_HEADER include/spdk/tree.h 00:01:54.162 TEST_HEADER include/spdk/version.h 00:01:54.162 TEST_HEADER include/spdk/util.h 00:01:54.162 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:54.162 TEST_HEADER include/spdk/vhost.h 00:01:54.162 TEST_HEADER include/spdk/uuid.h 00:01:54.162 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:54.162 TEST_HEADER include/spdk/vmd.h 00:01:54.162 TEST_HEADER include/spdk/xor.h 00:01:54.162 TEST_HEADER include/spdk/zipf.h 00:01:54.162 CXX test/cpp_headers/accel_module.o 00:01:54.162 CXX test/cpp_headers/accel.o 00:01:54.162 CXX test/cpp_headers/barrier.o 00:01:54.162 CXX test/cpp_headers/assert.o 00:01:54.162 CXX test/cpp_headers/base64.o 00:01:54.162 CXX test/cpp_headers/bdev.o 00:01:54.162 CXX test/cpp_headers/bdev_module.o 00:01:54.162 CXX test/cpp_headers/bdev_zone.o 00:01:54.162 CXX test/cpp_headers/bit_array.o 00:01:54.162 CXX test/cpp_headers/bit_pool.o 00:01:54.162 CXX test/cpp_headers/blobfs_bdev.o 00:01:54.162 CXX test/cpp_headers/blob_bdev.o 00:01:54.162 CXX test/cpp_headers/blobfs.o 00:01:54.162 CXX test/cpp_headers/conf.o 00:01:54.162 CXX test/cpp_headers/blob.o 00:01:54.162 CXX test/cpp_headers/cpuset.o 00:01:54.162 CXX test/cpp_headers/config.o 00:01:54.162 CXX test/cpp_headers/crc64.o 00:01:54.162 CXX test/cpp_headers/crc32.o 00:01:54.162 CXX test/cpp_headers/crc16.o 00:01:54.162 CXX test/cpp_headers/dif.o 00:01:54.162 CC app/fio/nvme/fio_plugin.o 00:01:54.162 CC examples/accel/perf/accel_perf.o 00:01:54.162 CXX test/cpp_headers/endian.o 00:01:54.162 CXX test/cpp_headers/dma.o 00:01:54.162 CXX test/cpp_headers/env_dpdk.o 00:01:54.162 CXX test/cpp_headers/fd_group.o 00:01:54.162 CXX 
test/cpp_headers/event.o 00:01:54.162 CXX test/cpp_headers/env.o 00:01:54.162 CC examples/ioat/perf/perf.o 00:01:54.162 CC examples/nvme/reconnect/reconnect.o 00:01:54.162 CXX test/cpp_headers/fd.o 00:01:54.162 CXX test/cpp_headers/file.o 00:01:54.162 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:54.162 CC examples/vmd/led/led.o 00:01:54.162 CC test/app/stub/stub.o 00:01:54.162 CC test/event/reactor/reactor.o 00:01:54.162 CXX test/cpp_headers/gpt_spec.o 00:01:54.162 CXX test/cpp_headers/ftl.o 00:01:54.162 CC examples/nvme/arbitration/arbitration.o 00:01:54.162 CXX test/cpp_headers/hexlify.o 00:01:54.162 CXX test/cpp_headers/init.o 00:01:54.162 CXX test/cpp_headers/histogram_data.o 00:01:54.162 CXX test/cpp_headers/idxd_spec.o 00:01:54.162 CXX test/cpp_headers/idxd.o 00:01:54.162 CXX test/cpp_headers/ioat.o 00:01:54.162 CC test/app/jsoncat/jsoncat.o 00:01:54.162 CXX test/cpp_headers/iscsi_spec.o 00:01:54.162 CXX test/cpp_headers/ioat_spec.o 00:01:54.162 CXX test/cpp_headers/json.o 00:01:54.162 CXX test/cpp_headers/likely.o 00:01:54.162 CXX test/cpp_headers/jsonrpc.o 00:01:54.162 CC test/nvme/aer/aer.o 00:01:54.162 CXX test/cpp_headers/memory.o 00:01:54.162 CXX test/cpp_headers/log.o 00:01:54.162 CXX test/cpp_headers/lvol.o 00:01:54.162 CXX test/cpp_headers/notify.o 00:01:54.162 CC test/event/event_perf/event_perf.o 00:01:54.162 CXX test/cpp_headers/nvme.o 00:01:54.162 CXX test/cpp_headers/mmio.o 00:01:54.162 CXX test/cpp_headers/nbd.o 00:01:54.162 CXX test/cpp_headers/nvme_intel.o 00:01:54.162 CXX test/cpp_headers/nvme_spec.o 00:01:54.162 CXX test/cpp_headers/nvme_ocssd.o 00:01:54.162 CC examples/util/zipf/zipf.o 00:01:54.162 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:54.162 CC test/nvme/reset/reset.o 00:01:54.162 CC examples/idxd/perf/perf.o 00:01:54.162 CC test/nvme/err_injection/err_injection.o 00:01:54.162 CC test/nvme/fdp/fdp.o 00:01:54.162 CC examples/ioat/verify/verify.o 00:01:54.162 CC test/nvme/e2edp/nvme_dp.o 00:01:54.162 CC test/nvme/reserve/reserve.o 00:01:54.163 CC examples/nvme/hello_world/hello_world.o 00:01:54.163 CC examples/vmd/lsvmd/lsvmd.o 00:01:54.163 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:54.163 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:54.163 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:54.163 CC examples/nvme/hotplug/hotplug.o 00:01:54.163 CC test/app/histogram_perf/histogram_perf.o 00:01:54.163 CC test/event/reactor_perf/reactor_perf.o 00:01:54.163 LINK spdk_lspci 00:01:54.163 CC test/nvme/simple_copy/simple_copy.o 00:01:54.163 CC test/nvme/connect_stress/connect_stress.o 00:01:54.163 CC test/thread/poller_perf/poller_perf.o 00:01:54.163 CC test/nvme/overhead/overhead.o 00:01:54.163 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:54.163 CC examples/bdev/hello_world/hello_bdev.o 00:01:54.429 CC examples/sock/hello_world/hello_sock.o 00:01:54.429 CC examples/nvmf/nvmf/nvmf.o 00:01:54.429 CC test/nvme/startup/startup.o 00:01:54.429 CC test/bdev/bdevio/bdevio.o 00:01:54.429 CC test/nvme/sgl/sgl.o 00:01:54.429 CC test/nvme/cuse/cuse.o 00:01:54.429 CC test/app/bdev_svc/bdev_svc.o 00:01:54.429 CC examples/thread/thread/thread_ex.o 00:01:54.429 CC test/nvme/boot_partition/boot_partition.o 00:01:54.429 CC test/env/vtophys/vtophys.o 00:01:54.429 CC test/env/memory/memory_ut.o 00:01:54.429 CC app/fio/bdev/fio_plugin.o 00:01:54.429 CC test/blobfs/mkfs/mkfs.o 00:01:54.429 CC test/env/pci/pci_ut.o 00:01:54.429 CC examples/nvme/abort/abort.o 00:01:54.429 CC test/dma/test_dma/test_dma.o 00:01:54.430 CC test/event/app_repeat/app_repeat.o 
00:01:54.430 CC test/nvme/fused_ordering/fused_ordering.o 00:01:54.430 CC test/nvme/compliance/nvme_compliance.o 00:01:54.430 CC examples/blob/cli/blobcli.o 00:01:54.430 CC examples/blob/hello_world/hello_blob.o 00:01:54.430 CC test/event/scheduler/scheduler.o 00:01:54.430 CC examples/bdev/bdevperf/bdevperf.o 00:01:54.430 CC test/accel/dif/dif.o 00:01:54.430 LINK interrupt_tgt 00:01:54.689 CC test/lvol/esnap/esnap.o 00:01:54.689 CC test/env/mem_callbacks/mem_callbacks.o 00:01:54.689 LINK vhost 00:01:54.689 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:54.689 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:54.689 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:54.689 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:54.689 LINK spdk_trace_record 00:01:54.689 LINK nvmf_tgt 00:01:54.689 LINK reactor 00:01:54.689 LINK spdk_nvme_discover 00:01:54.952 LINK led 00:01:54.952 LINK reactor_perf 00:01:54.952 LINK cmb_copy 00:01:54.952 LINK jsoncat 00:01:54.952 LINK iscsi_tgt 00:01:54.952 LINK spdk_tgt 00:01:54.952 LINK doorbell_aers 00:01:54.952 LINK event_perf 00:01:54.952 LINK vtophys 00:01:54.952 LINK connect_stress 00:01:54.952 LINK app_repeat 00:01:54.952 LINK fused_ordering 00:01:54.952 LINK lsvmd 00:01:54.952 LINK hello_world 00:01:54.952 LINK stub 00:01:54.952 LINK rpc_client_test 00:01:54.952 CXX test/cpp_headers/nvme_zns.o 00:01:54.952 LINK reserve 00:01:54.952 LINK zipf 00:01:54.952 LINK bdev_svc 00:01:54.952 LINK poller_perf 00:01:54.952 LINK hello_bdev 00:01:55.210 CXX test/cpp_headers/nvmf_cmd.o 00:01:55.210 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:55.210 LINK startup 00:01:55.210 CXX test/cpp_headers/nvmf.o 00:01:55.210 CXX test/cpp_headers/nvmf_spec.o 00:01:55.210 CXX test/cpp_headers/nvmf_transport.o 00:01:55.210 LINK spdk_trace 00:01:55.210 LINK thread 00:01:55.210 LINK nvme_dp 00:01:55.210 LINK env_dpdk_post_init 00:01:55.210 LINK err_injection 00:01:55.210 CXX test/cpp_headers/opal_spec.o 00:01:55.210 LINK arbitration 00:01:55.210 CXX test/cpp_headers/opal.o 00:01:55.210 CXX test/cpp_headers/pci_ids.o 00:01:55.210 CXX test/cpp_headers/pipe.o 00:01:55.210 LINK reset 00:01:55.210 CXX test/cpp_headers/queue.o 00:01:55.210 CXX test/cpp_headers/reduce.o 00:01:55.210 LINK histogram_perf 00:01:55.210 CXX test/cpp_headers/rpc.o 00:01:55.210 CXX test/cpp_headers/scheduler.o 00:01:55.210 CXX test/cpp_headers/scsi.o 00:01:55.210 LINK mkfs 00:01:55.210 CXX test/cpp_headers/scsi_spec.o 00:01:55.210 CXX test/cpp_headers/stdinc.o 00:01:55.210 CXX test/cpp_headers/string.o 00:01:55.210 CXX test/cpp_headers/thread.o 00:01:55.210 CXX test/cpp_headers/sock.o 00:01:55.210 CXX test/cpp_headers/trace.o 00:01:55.210 CXX test/cpp_headers/trace_parser.o 00:01:55.210 CXX test/cpp_headers/ublk.o 00:01:55.210 LINK pmr_persistence 00:01:55.210 CXX test/cpp_headers/tree.o 00:01:55.210 CXX test/cpp_headers/util.o 00:01:55.210 CXX test/cpp_headers/version.o 00:01:55.210 CXX test/cpp_headers/uuid.o 00:01:55.210 CXX test/cpp_headers/vfio_user_pci.o 00:01:55.210 LINK boot_partition 00:01:55.210 CXX test/cpp_headers/vhost.o 00:01:55.210 LINK verify 00:01:55.210 CXX test/cpp_headers/vfio_user_spec.o 00:01:55.210 CXX test/cpp_headers/vmd.o 00:01:55.210 CXX test/cpp_headers/xor.o 00:01:55.210 CXX test/cpp_headers/zipf.o 00:01:55.210 LINK nvmf 00:01:55.210 LINK idxd_perf 00:01:55.210 LINK reconnect 00:01:55.210 LINK fdp 00:01:55.210 LINK test_dma 00:01:55.210 LINK scheduler 00:01:55.210 LINK hotplug 00:01:55.467 LINK ioat_perf 00:01:55.467 LINK hello_blob 00:01:55.467 LINK abort 00:01:55.467 LINK aer 00:01:55.467 LINK 
hello_sock 00:01:55.467 LINK overhead 00:01:55.467 LINK simple_copy 00:01:55.467 LINK sgl 00:01:55.467 LINK spdk_dd 00:01:55.467 LINK blobcli 00:01:55.467 LINK dif 00:01:55.467 LINK nvme_fuzz 00:01:55.728 LINK bdevio 00:01:55.728 LINK mem_callbacks 00:01:55.728 LINK nvme_compliance 00:01:55.728 LINK pci_ut 00:01:55.728 LINK spdk_nvme 00:01:55.728 LINK accel_perf 00:01:55.728 LINK vhost_fuzz 00:01:55.728 LINK spdk_nvme_perf 00:01:55.728 LINK spdk_bdev 00:01:55.728 LINK nvme_manage 00:01:55.988 LINK memory_ut 00:01:55.988 LINK spdk_top 00:01:55.988 LINK spdk_nvme_identify 00:01:55.988 LINK bdevperf 00:01:55.988 LINK cuse 00:01:56.554 LINK iscsi_fuzz 00:01:58.456 LINK esnap 00:01:58.714 00:01:58.714 real 0m34.532s 00:01:58.714 user 5m36.313s 00:01:58.714 sys 4m42.026s 00:01:58.714 16:00:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:58.714 16:00:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.714 ************************************ 00:01:58.714 END TEST make 00:01:58.714 ************************************ 00:01:58.714 16:00:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:01:58.714 16:00:57 -- nvmf/common.sh@7 -- # uname -s 00:01:58.714 16:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:58.714 16:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:58.714 16:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:58.714 16:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:58.714 16:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:58.714 16:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:58.714 16:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:58.714 16:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:58.714 16:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:58.714 16:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:58.714 16:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:58.714 16:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:58.714 16:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:58.714 16:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:58.714 16:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:01:58.714 16:00:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:58.972 16:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:58.972 16:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:58.972 16:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:58.972 16:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.972 16:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.972 16:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.972 16:00:57 -- paths/export.sh@5 -- # export PATH 00:01:58.972 16:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:58.972 16:00:57 -- nvmf/common.sh@46 -- # : 0 00:01:58.973 16:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:01:58.973 16:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:01:58.973 16:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:01:58.973 16:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:58.973 16:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:58.973 16:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:01:58.973 16:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:01:58.973 16:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:01:58.973 16:00:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:58.973 16:00:57 -- spdk/autotest.sh@32 -- # uname -s 00:01:58.973 16:00:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:58.973 16:00:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:58.973 16:00:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:58.973 16:00:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:58.973 16:00:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:58.973 16:00:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:58.973 16:00:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:58.973 16:00:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:58.973 16:00:57 -- spdk/autotest.sh@48 -- # udevadm_pid=2812301 00:01:58.973 16:00:57 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:58.973 16:00:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:58.973 16:00:57 -- spdk/autotest.sh@54 -- # echo 2812303 00:01:58.973 16:00:57 -- spdk/autotest.sh@56 -- # echo 2812304 00:01:58.973 16:00:57 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:01:58.973 16:00:57 -- spdk/autotest.sh@60 -- # echo 2812305 00:01:58.973 16:00:57 -- spdk/autotest.sh@62 -- # echo 2812306 00:01:58.973 16:00:57 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:58.973 16:00:57 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.973 16:00:57 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:01:58.973 16:00:57 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:01:58.973 16:00:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:01:58.973 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:01:58.973 16:00:57 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:01:58.973 16:00:57 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l 00:01:58.973 16:00:57 -- spdk/autotest.sh@70 -- # create_test_list 00:01:58.973 16:00:57 -- common/autotest_common.sh@736 -- # xtrace_disable 00:01:58.973 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:01:58.973 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:01:58.973 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:01:58.973 16:00:57 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:01:58.973 16:00:57 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:58.973 16:00:57 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:58.973 16:00:57 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:58.973 16:00:57 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:58.973 16:00:57 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:01:58.973 16:00:57 -- common/autotest_common.sh@1440 -- # uname 00:01:58.973 16:00:57 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:01:58.973 16:00:57 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:01:58.973 16:00:57 -- common/autotest_common.sh@1460 -- # uname 00:01:58.973 16:00:57 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:01:58.973 16:00:57 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:01:58.973 16:00:57 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:01:58.973 16:00:57 -- spdk/autotest.sh@83 -- # hash lcov 00:01:58.973 16:00:57 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:58.973 16:00:57 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:01:58.973 --rc lcov_branch_coverage=1 00:01:58.973 --rc lcov_function_coverage=1 00:01:58.973 --rc genhtml_branch_coverage=1 00:01:58.973 --rc genhtml_function_coverage=1 00:01:58.973 --rc genhtml_legend=1 00:01:58.973 --rc geninfo_all_blocks=1 00:01:58.973 ' 00:01:58.973 16:00:57 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:01:58.973 --rc lcov_branch_coverage=1 00:01:58.973 --rc lcov_function_coverage=1 00:01:58.973 --rc genhtml_branch_coverage=1 00:01:58.973 --rc genhtml_function_coverage=1 00:01:58.973 --rc genhtml_legend=1 00:01:58.973 --rc 
geninfo_all_blocks=1 00:01:58.973 ' 00:01:58.973 16:00:57 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:01:58.973 --rc lcov_branch_coverage=1 00:01:58.973 --rc lcov_function_coverage=1 00:01:58.973 --rc genhtml_branch_coverage=1 00:01:58.973 --rc genhtml_function_coverage=1 00:01:58.973 --rc genhtml_legend=1 00:01:58.973 --rc geninfo_all_blocks=1 00:01:58.973 --no-external' 00:01:58.973 16:00:57 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:01:58.973 --rc lcov_branch_coverage=1 00:01:58.973 --rc lcov_function_coverage=1 00:01:58.973 --rc genhtml_branch_coverage=1 00:01:58.973 --rc genhtml_function_coverage=1 00:01:58.973 --rc genhtml_legend=1 00:01:58.973 --rc geninfo_all_blocks=1 00:01:58.973 --no-external' 00:01:58.973 16:00:57 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:58.973 lcov: LCOV version 1.14 00:01:58.973 16:00:57 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:02:03.182 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:03.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:03.182 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:03.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:03.182 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:03.182 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:13.178 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 
00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:13.178 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:13.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:13.178 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:13.179 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:13.179 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:13.179 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:13.179 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:14.123 16:01:12 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:14.123 16:01:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:14.123 16:01:12 -- common/autotest_common.sh@10 -- # set +x 00:02:14.123 16:01:12 -- spdk/autotest.sh@102 -- # rm -f 00:02:14.123 16:01:12 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:17.434 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:02:17.434 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:17.434 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:e7:02.0 (8086 0cfe): Already using 
the idxd driver 00:02:17.434 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:17.434 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:02:17.434 16:01:16 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:17.434 16:01:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:17.434 16:01:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:17.434 16:01:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:17.434 16:01:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:17.434 16:01:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:17.434 16:01:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:17.434 16:01:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:17.434 16:01:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:17.434 16:01:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:17.434 16:01:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:17.434 16:01:16 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:17.434 16:01:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:17.434 16:01:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:17.434 16:01:16 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:17.434 16:01:16 -- spdk/autotest.sh@121 -- # grep -v p 00:02:17.434 16:01:16 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 00:02:17.434 16:01:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:17.434 16:01:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:17.434 16:01:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:17.434 16:01:16 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:17.434 16:01:16 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:17.434 No valid GPT data, bailing 00:02:17.434 16:01:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:17.434 16:01:16 -- scripts/common.sh@393 -- # pt= 00:02:17.434 16:01:16 -- scripts/common.sh@394 -- # return 1 00:02:17.434 16:01:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:17.434 1+0 records in 00:02:17.434 1+0 records out 00:02:17.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00270024 s, 388 MB/s 00:02:17.434 16:01:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:17.434 16:01:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:17.434 16:01:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:02:17.434 16:01:16 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:17.434 16:01:16 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:17.434 No valid GPT data, bailing 00:02:17.434 16:01:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:17.434 16:01:16 -- scripts/common.sh@393 -- # pt= 00:02:17.434 16:01:16 -- scripts/common.sh@394 -- # return 1 00:02:17.434 16:01:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:17.434 1+0 records in 00:02:17.434 1+0 records out 00:02:17.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00225127 s, 466 MB/s 00:02:17.434 16:01:16 -- spdk/autotest.sh@129 -- # sync 00:02:17.434 16:01:16 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:17.434 
16:01:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:17.434 16:01:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:21.685 16:01:20 -- spdk/autotest.sh@135 -- # uname -s 00:02:21.685 16:01:20 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:21.685 16:01:20 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:21.685 16:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:21.685 16:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:21.685 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:02:21.685 ************************************ 00:02:21.685 START TEST setup.sh 00:02:21.685 ************************************ 00:02:21.685 16:01:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:21.947 * Looking for test storage... 00:02:21.947 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:21.947 16:01:20 -- setup/test-setup.sh@10 -- # uname -s 00:02:21.947 16:01:20 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:21.947 16:01:20 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:21.947 16:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:21.947 16:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:21.947 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:02:21.947 ************************************ 00:02:21.947 START TEST acl 00:02:21.947 ************************************ 00:02:21.947 16:01:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:21.947 * Looking for test storage... 
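Both autotest.sh (the pre-cleanup above) and acl.sh (below) begin by calling get_zoned_devs, whose trace shows is_block_zoned reading /sys/block/nvme*/queue/zoned for every namespace so that zoned devices can be excluded from the generic block-device steps (the spdk-gpt.py probe and the dd wipe). A minimal sketch of that idea, assuming the same sysfs layout; the real common.sh helper differs in detail:

# Sketch only: collect NVMe namespaces that report a zoned model so later
# steps can leave them alone.
get_zoned_devs() {
    declare -gA zoned_devs=()
    local nvme model
    for nvme in /sys/block/nvme*; do
        [[ -e "$nvme/queue/zoned" ]] || continue
        model=$(<"$nvme/queue/zoned")
        # "none" means an ordinary namespace; anything else is zoned
        if [[ "$model" != none ]]; then
            zoned_devs["${nvme##*/}"]=$model
        fi
    done
}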
00:02:21.947 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:21.947 16:01:20 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:21.947 16:01:20 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:21.947 16:01:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:21.947 16:01:20 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:21.947 16:01:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:21.947 16:01:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:21.947 16:01:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:21.947 16:01:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.948 16:01:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:21.948 16:01:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:21.948 16:01:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:02:21.948 16:01:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:02:21.948 16:01:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:21.948 16:01:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:21.948 16:01:20 -- setup/acl.sh@12 -- # devs=() 00:02:21.948 16:01:20 -- setup/acl.sh@12 -- # declare -a devs 00:02:21.948 16:01:20 -- setup/acl.sh@13 -- # drivers=() 00:02:21.948 16:01:20 -- setup/acl.sh@13 -- # declare -A drivers 00:02:21.948 16:01:20 -- setup/acl.sh@51 -- # setup reset 00:02:21.948 16:01:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:21.948 16:01:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.356 16:01:23 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:25.357 16:01:23 -- setup/acl.sh@16 -- # local dev driver 00:02:25.357 16:01:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.357 16:01:23 -- setup/acl.sh@15 -- # setup output status 00:02:25.357 16:01:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.357 16:01:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:27.268 Hugepages 00:02:27.268 node hugesize free / total 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # continue 00:02:27.268 16:01:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # continue 00:02:27.268 16:01:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # continue 00:02:27.268 16:01:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 00:02:27.268 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.268 16:01:25 -- setup/acl.sh@19 -- # continue 00:02:27.268 16:01:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:03:00.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:27.268 16:01:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # 
read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:27.268 16:01:26 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- 
setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:27.268 16:01:26 -- setup/acl.sh@20 -- # continue 00:02:27.268 16:01:26 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.268 16:01:26 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:27.268 16:01:26 -- setup/acl.sh@54 -- # run_test denied denied 00:02:27.268 16:01:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:27.268 16:01:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:27.268 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:02:27.268 ************************************ 00:02:27.268 START TEST denied 00:02:27.268 ************************************ 00:02:27.268 16:01:26 -- common/autotest_common.sh@1104 -- # denied 00:02:27.268 16:01:26 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:03:00.0' 00:02:27.268 16:01:26 -- setup/acl.sh@38 -- # setup output config 00:02:27.268 16:01:26 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:03:00.0' 00:02:27.268 16:01:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.268 16:01:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:31.481 0000:03:00.0 (1344 51c3): Skipping denied controller at 0000:03:00.0 00:02:31.481 16:01:29 -- setup/acl.sh@40 -- # verify 0000:03:00.0 00:02:31.481 16:01:29 -- setup/acl.sh@28 -- # local dev driver 00:02:31.481 16:01:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:31.482 16:01:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:03:00.0 ]] 00:02:31.482 16:01:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:03:00.0/driver 00:02:31.482 16:01:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:31.482 16:01:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:31.482 16:01:29 -- setup/acl.sh@41 -- # setup reset 00:02:31.482 16:01:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.482 16:01:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.688 00:02:35.688 real 0m7.953s 00:02:35.688 user 0m2.000s 00:02:35.688 sys 0m3.870s 00:02:35.688 16:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.688 16:01:34 -- common/autotest_common.sh@10 -- # set +x 00:02:35.688 ************************************ 00:02:35.688 END TEST denied 00:02:35.688 ************************************ 00:02:35.688 16:01:34 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:35.688 16:01:34 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:35.688 16:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:35.688 16:01:34 -- common/autotest_common.sh@10 -- # set +x 00:02:35.688 ************************************ 00:02:35.688 START TEST allowed 00:02:35.688 ************************************ 00:02:35.688 16:01:34 -- common/autotest_common.sh@1104 -- # allowed 00:02:35.688 16:01:34 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:03:00.0 00:02:35.688 16:01:34 -- setup/acl.sh@45 -- # setup output config 00:02:35.688 16:01:34 -- setup/acl.sh@46 -- # grep -E '0000:03:00.0 .*: nvme -> .*' 00:02:35.688 16:01:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.688 16:01:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:39.896 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:02:39.896 16:01:37 -- setup/acl.sh@47 -- # verify 0000:c9:00.0 00:02:39.896 16:01:37 -- setup/acl.sh@28 -- # local dev driver 00:02:39.896 16:01:37 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:39.896 16:01:37 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:39.896 16:01:37 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:02:39.896 16:01:37 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:39.896 16:01:37 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:39.896 16:01:37 -- setup/acl.sh@48 -- # setup reset 00:02:39.896 16:01:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.896 16:01:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.443 00:02:42.443 real 0m6.916s 00:02:42.443 user 0m1.881s 00:02:42.443 sys 0m3.867s 00:02:42.443 16:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.443 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:02:42.443 ************************************ 00:02:42.443 END TEST allowed 00:02:42.443 ************************************ 00:02:42.443 00:02:42.443 real 0m20.449s 00:02:42.443 user 0m5.784s 00:02:42.443 sys 0m11.226s 00:02:42.443 16:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.443 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:02:42.443 ************************************ 00:02:42.443 END TEST acl 00:02:42.443 ************************************ 00:02:42.443 16:01:41 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:42.443 16:01:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:42.443 16:01:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:42.443 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:02:42.443 ************************************ 00:02:42.443 START TEST hugepages 00:02:42.443 ************************************ 00:02:42.443 16:01:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:42.443 * Looking for test storage... 
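The denied and allowed cases above exercise scripts/setup.sh purely through its PCI filter variables: PCI_BLOCKED makes the script skip a controller ("Skipping denied controller at 0000:03:00.0"), while PCI_ALLOWED restricts it to that controller ("nvme -> vfio-pci"). Run by hand, the same two cases look roughly like this (addresses copied from this log; substitute your own):

# "denied" case: skip 0000:03:00.0 and set up everything else.
sudo PCI_BLOCKED="0000:03:00.0" ./scripts/setup.sh config

# "allowed" case: touch only 0000:03:00.0, rebinding it to a userspace driver.
sudo PCI_ALLOWED="0000:03:00.0" ./scripts/setup.sh config

# Return every device to its original kernel driver between cases.
sudo ./scripts/setup.sh reset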
00:02:42.443 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:42.443 16:01:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:42.443 16:01:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:42.443 16:01:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:42.444 16:01:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:42.444 16:01:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:42.444 16:01:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:42.444 16:01:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:42.444 16:01:41 -- setup/common.sh@18 -- # local node= 00:02:42.444 16:01:41 -- setup/common.sh@19 -- # local var val 00:02:42.444 16:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.444 16:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.444 16:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.444 16:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.444 16:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.444 16:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 104263336 kB' 'MemAvailable: 108975992 kB' 'Buffers: 2780 kB' 'Cached: 13389748 kB' 'SwapCached: 0 kB' 'Active: 9428748 kB' 'Inactive: 4601696 kB' 'Active(anon): 8857388 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647256 kB' 'Mapped: 207668 kB' 'Shmem: 8219472 kB' 'KReclaimable: 580944 kB' 'Slab: 1298048 kB' 'SReclaimable: 580944 kB' 'SUnreclaim: 717104 kB' 'KernelStack: 25280 kB' 'PageTables: 10096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69510444 kB' 'Committed_AS: 10505904 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230712 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 
00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.444 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.444 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 
00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # continue 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.445 16:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.445 16:01:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.445 16:01:41 -- setup/common.sh@33 -- # echo 2048 00:02:42.445 16:01:41 -- setup/common.sh@33 -- # return 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:42.445 16:01:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:42.445 16:01:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:42.445 16:01:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:42.445 16:01:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:42.445 16:01:41 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:42.445 16:01:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:42.445 16:01:41 -- setup/hugepages.sh@207 -- # get_nodes 00:02:42.445 16:01:41 -- setup/hugepages.sh@27 -- # local node 00:02:42.445 16:01:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.445 16:01:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:42.445 16:01:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.445 16:01:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.445 16:01:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.445 16:01:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.445 16:01:41 -- setup/hugepages.sh@208 -- # clear_hp 00:02:42.445 16:01:41 -- setup/hugepages.sh@37 -- # local node hp 00:02:42.445 16:01:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.445 16:01:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.445 16:01:41 -- setup/hugepages.sh@41 -- # echo 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.445 16:01:41 -- setup/hugepages.sh@41 -- # echo 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.445 16:01:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.445 16:01:41 -- setup/hugepages.sh@41 -- # echo 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.445 16:01:41 -- setup/hugepages.sh@41 -- # echo 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:42.445 16:01:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:42.445 16:01:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:42.445 16:01:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:42.445 16:01:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:42.445 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:02:42.445 ************************************ 00:02:42.445 START TEST default_setup 00:02:42.445 ************************************ 00:02:42.445 16:01:41 -- common/autotest_common.sh@1104 -- # default_setup 00:02:42.445 16:01:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.445 16:01:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:42.445 16:01:41 -- setup/hugepages.sh@51 -- # shift 00:02:42.445 16:01:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:42.445 16:01:41 -- setup/hugepages.sh@52 -- # local node_ids 00:02:42.445 16:01:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.445 16:01:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.445 16:01:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:42.445 16:01:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.445 16:01:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.445 16:01:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.445 16:01:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.445 16:01:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.445 16:01:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
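The hugepages sizing traced above and continued below boils down to simple arithmetic: setup/common.sh reports Hugepagesize as 2048 kB, clear_hp writes 0 into every per-node nr_hugepages file to drop stale reservations, and get_test_nr_hugepages then converts the 2097152 kB test target into 2097152 / 2048 = 1024 hugepages assigned to node 0. A minimal standalone sketch of that calculation, with illustrative variable names rather than the script's own:

#!/usr/bin/env bash
# Hedged sketch: reproduces the sizing arithmetic visible in the trace,
# not the actual setup/hugepages.sh implementation.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this node

target_kb=2097152        # test target seen in the trace (2 GiB of hugepage memory)
node_ids=(0)             # the trace pins the reservation to node 0

nr_hugepages=$(( target_kb / default_hugepages ))    # 2097152 / 2048 = 1024 pages

nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages                  # 1024 pages requested on node 0
done
echo "requesting ${nr_hugepages} x ${default_hugepages} kB hugepages on node(s) ${node_ids[*]}"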
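The Hugepagesize lookup above, and the AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total lookups in the verify_nr_hugepages pass traced below, all follow the same get_meminfo pattern: snapshot /proc/meminfo (or a node's meminfo file when a node is named), then walk the "Key: value" pairs with IFS=': ' until the requested key matches and echo its value. A simplified, self-contained sketch of that scan, modeled on the traced behaviour rather than the SPDK helper itself:

#!/usr/bin/env bash
shopt -s extglob
# Hedged sketch of a get_meminfo-style lookup; the function name and layout
# are illustrative, only the field-by-field scan mirrors the trace.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # every non-matching field is skipped, as in the trace
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo_value Hugepagesize       # 2048 on this node
get_meminfo_value HugePages_Total    # 1024 once the reservation above is applied

The verification traced further below reads HugePages_Total, HugePages_Rsvd, HugePages_Surp and AnonHugePages this way and checks that the reservation adds up, i.e. 1024 == nr_hugepages + surplus + reserved, with both surplus and reserved at 0 in this run.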
00:02:42.445 16:01:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.445 16:01:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:42.445 16:01:41 -- setup/hugepages.sh@73 -- # return 0 00:02:42.445 16:01:41 -- setup/hugepages.sh@137 -- # setup output 00:02:42.445 16:01:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.445 16:01:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:44.990 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:44.990 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.251 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.251 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.251 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.251 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.251 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.251 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.251 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.251 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.513 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.513 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.514 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.514 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:02:45.514 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:45.514 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:02:46.086 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:02:46.347 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:02:46.347 16:01:45 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:46.347 16:01:45 -- setup/hugepages.sh@89 -- # local node 00:02:46.347 16:01:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.347 16:01:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.347 16:01:45 -- setup/hugepages.sh@92 -- # local surp 00:02:46.347 16:01:45 -- setup/hugepages.sh@93 -- # local resv 00:02:46.348 16:01:45 -- setup/hugepages.sh@94 -- # local anon 00:02:46.348 16:01:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.348 16:01:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.611 16:01:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.611 16:01:45 -- setup/common.sh@18 -- # local node= 00:02:46.611 16:01:45 -- setup/common.sh@19 -- # local var val 00:02:46.611 16:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.611 16:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.611 16:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.611 16:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.611 16:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.611 16:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.611 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.611 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106544552 kB' 'MemAvailable: 111256728 kB' 'Buffers: 2780 kB' 'Cached: 13390000 kB' 'SwapCached: 0 kB' 'Active: 9456728 kB' 'Inactive: 4601696 kB' 'Active(anon): 8885368 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674996 kB' 'Mapped: 208096 kB' 'Shmem: 8219724 kB' 'KReclaimable: 580464 kB' 'Slab: 1291828 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 711364 kB' 'KernelStack: 
25376 kB' 'PageTables: 11784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10590096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230744 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.612 16:01:45 -- setup/common.sh@33 -- # echo 0 00:02:46.612 16:01:45 -- setup/common.sh@33 -- # return 0 00:02:46.612 16:01:45 -- setup/hugepages.sh@97 -- # anon=0 00:02:46.612 16:01:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.612 16:01:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.612 16:01:45 -- setup/common.sh@18 -- # local node= 00:02:46.612 16:01:45 -- setup/common.sh@19 -- # local var val 00:02:46.612 16:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.612 16:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.612 16:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.612 16:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.612 16:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.612 16:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106546420 kB' 'MemAvailable: 111258596 kB' 'Buffers: 2780 kB' 'Cached: 13390000 kB' 'SwapCached: 0 kB' 'Active: 9457140 kB' 'Inactive: 4601696 kB' 'Active(anon): 8885780 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675416 kB' 'Mapped: 208096 kB' 'Shmem: 8219724 kB' 'KReclaimable: 580464 kB' 'Slab: 1291804 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 711340 kB' 'KernelStack: 25376 kB' 'PageTables: 11912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10590108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230616 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.612 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.612 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 
16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': 
' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.613 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.613 16:01:45 -- setup/common.sh@33 -- # echo 0 00:02:46.613 16:01:45 -- setup/common.sh@33 -- # return 0 00:02:46.613 16:01:45 -- setup/hugepages.sh@99 -- # surp=0 00:02:46.613 16:01:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.613 16:01:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.613 16:01:45 -- setup/common.sh@18 -- # local node= 00:02:46.613 16:01:45 -- setup/common.sh@19 -- # local var val 00:02:46.613 16:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.613 16:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.613 16:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.613 16:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.613 16:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.613 16:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.613 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106559316 kB' 'MemAvailable: 111271492 kB' 'Buffers: 2780 kB' 'Cached: 13390012 kB' 'SwapCached: 0 kB' 'Active: 9456652 kB' 'Inactive: 4601696 kB' 'Active(anon): 8885292 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674932 kB' 'Mapped: 208076 kB' 'Shmem: 8219736 kB' 'KReclaimable: 580464 kB' 'Slab: 1291740 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 711276 kB' 'KernelStack: 25360 kB' 'PageTables: 11660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10589872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230680 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- 
setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.614 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.614 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.614 16:01:45 -- setup/common.sh@33 -- # echo 0 00:02:46.614 16:01:45 -- setup/common.sh@33 -- # return 0 00:02:46.614 16:01:45 -- setup/hugepages.sh@100 -- # resv=0 00:02:46.614 16:01:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:46.614 nr_hugepages=1024 00:02:46.614 16:01:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.614 resv_hugepages=0 00:02:46.614 16:01:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.614 surplus_hugepages=0 00:02:46.614 16:01:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.614 anon_hugepages=0 00:02:46.615 16:01:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.615 16:01:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:46.615 16:01:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.615 16:01:45 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:02:46.615 16:01:45 -- setup/common.sh@18 -- # local node= 00:02:46.615 16:01:45 -- setup/common.sh@19 -- # local var val 00:02:46.615 16:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.615 16:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.615 16:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.615 16:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.615 16:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.615 16:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106558932 kB' 'MemAvailable: 111271108 kB' 'Buffers: 2780 kB' 'Cached: 13390028 kB' 'SwapCached: 0 kB' 'Active: 9456520 kB' 'Inactive: 4601696 kB' 'Active(anon): 8885160 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674752 kB' 'Mapped: 208076 kB' 'Shmem: 8219752 kB' 'KReclaimable: 580464 kB' 'Slab: 1291512 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 711048 kB' 'KernelStack: 25200 kB' 'PageTables: 11388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10590136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230680 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 
16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 
16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.615 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.615 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.616 16:01:45 -- setup/common.sh@33 -- # echo 1024 00:02:46.616 16:01:45 -- setup/common.sh@33 -- # return 0 00:02:46.616 16:01:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.616 16:01:45 -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.616 16:01:45 -- setup/hugepages.sh@27 -- # local node 00:02:46.616 16:01:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.616 16:01:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:46.616 16:01:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.616 16:01:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:46.616 16:01:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.616 16:01:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.616 16:01:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.616 16:01:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.616 16:01:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.616 16:01:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.616 16:01:45 -- setup/common.sh@18 -- # local node=0 00:02:46.616 16:01:45 -- setup/common.sh@19 -- # local var val 00:02:46.616 16:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.616 16:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.616 16:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.616 16:01:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.616 16:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.616 16:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 51875768 
kB' 'MemUsed: 13880212 kB' 'SwapCached: 0 kB' 'Active: 6593072 kB' 'Inactive: 3451672 kB' 'Active(anon): 6184896 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593732 kB' 'Mapped: 129324 kB' 'AnonPages: 460208 kB' 'Shmem: 5733884 kB' 'KernelStack: 14056 kB' 'PageTables: 7168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 666780 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 401620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 
-- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 
16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.616 16:01:45 -- setup/common.sh@32 -- # continue 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.616 16:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.617 16:01:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.617 16:01:45 -- setup/common.sh@33 -- # echo 0 00:02:46.617 16:01:45 -- setup/common.sh@33 -- # return 0 00:02:46.617 16:01:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.617 16:01:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.617 16:01:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.617 16:01:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.617 16:01:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:46.617 node0=1024 expecting 1024 00:02:46.617 16:01:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:46.617 00:02:46.617 real 0m4.123s 00:02:46.617 user 0m1.057s 00:02:46.617 sys 0m1.762s 00:02:46.617 16:01:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.617 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:02:46.617 ************************************ 00:02:46.617 END TEST default_setup 00:02:46.617 ************************************ 00:02:46.617 16:01:45 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:46.617 16:01:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:46.617 16:01:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:46.617 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:02:46.617 ************************************ 00:02:46.617 START TEST per_node_1G_alloc 00:02:46.617 ************************************ 00:02:46.617 16:01:45 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:46.617 16:01:45 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:46.617 16:01:45 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:46.617 16:01:45 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:46.617 16:01:45 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:46.617 16:01:45 -- setup/hugepages.sh@51 -- # shift 00:02:46.617 16:01:45 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:46.617 16:01:45 -- setup/hugepages.sh@52 -- # local node_ids 00:02:46.617 16:01:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.617 16:01:45 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:46.617 16:01:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:46.617 16:01:45 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:46.617 16:01:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.617 16:01:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:46.617 16:01:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.617 16:01:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.617 16:01:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.617 16:01:45 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:46.617 16:01:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.617 16:01:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:46.617 16:01:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.617 16:01:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:46.617 16:01:45 -- setup/hugepages.sh@73 -- # return 0 00:02:46.617 16:01:45 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:02:46.617 16:01:45 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:46.617 16:01:45 -- setup/hugepages.sh@146 -- # setup output 00:02:46.617 16:01:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.617 16:01:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:49.162 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:49.162 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:49.162 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:49.162 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:49.429 16:01:48 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:49.429 16:01:48 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:49.429 16:01:48 -- setup/hugepages.sh@89 -- # local node 00:02:49.429 16:01:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.429 16:01:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.429 16:01:48 -- setup/hugepages.sh@92 -- # local surp 00:02:49.429 16:01:48 -- setup/hugepages.sh@93 -- # local resv 00:02:49.429 16:01:48 -- setup/hugepages.sh@94 -- # local anon 00:02:49.429 16:01:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.429 16:01:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.429 16:01:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.429 16:01:48 -- setup/common.sh@18 -- # local node= 00:02:49.429 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.429 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.429 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.429 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.429 16:01:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.429 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.429 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106555992 kB' 'MemAvailable: 111268168 kB' 'Buffers: 2780 kB' 'Cached: 13390116 kB' 'SwapCached: 0 kB' 'Active: 9456908 kB' 'Inactive: 4601696 kB' 'Active(anon): 8885548 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674892 kB' 'Mapped: 208096 kB' 'Shmem: 8219840 kB' 'KReclaimable: 580464 kB' 'Slab: 1291244 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710780 kB' 'KernelStack: 25024 kB' 'PageTables: 11020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10588868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230568 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 
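The xtrace running through this stretch is setup/common.sh's get_meminfo walking every field of the meminfo snapshot it just dumped until it reaches the one it was asked for: each non-matching field falls through to "continue", and the match ends with "echo <value>" and "return 0". Below is a minimal stand-alone sketch of that parsing pattern; the function name get_meminfo_value and the simplified node handling are illustrative, not the exact SPDK helper.

shopt -s extglob

get_meminfo_value() {
    # $1 = meminfo field (e.g. HugePages_Total), $2 = optional NUMA node number
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # per-node counters live in /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # node meminfo prefixes every line with "Node N "; strip it, as common.sh@29 does
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Total     # prints 1024 in the run traced here
get_meminfo_value HugePages_Surp 0    # surplus pages on NUMA node 0; 0 in the dumps above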
00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
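Just before this scan, at setup/hugepages.sh@96, the test evaluates [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the left-hand side is the contents of /sys/kernel/mm/transparent_hugepage/enabled on this host, and the pattern asks whether the bracketed (active) mode is anything other than "never". Only when that gate passes does the trace go on to fetch AnonHugePages at hugepages.sh@97. A small sketch of that gate follows; the variable names are illustrative and awk stands in for the get_meminfo call.

# read the active THP mode, e.g. "always [madvise] never" as seen in the trace above
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP can produce anonymous huge pages, so count them
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0   # illustrative default; the real script only runs the fetch when the gate passes
fi
echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in the dump above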
00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.429 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.429 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 
16:01:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.430 16:01:48 -- setup/common.sh@33 -- # echo 0 00:02:49.430 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.430 16:01:48 -- setup/hugepages.sh@97 -- # anon=0 00:02:49.430 16:01:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.430 16:01:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.430 16:01:48 -- setup/common.sh@18 -- # local node= 00:02:49.430 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.430 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.430 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.430 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.430 16:01:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.430 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.430 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106555488 kB' 'MemAvailable: 111267664 kB' 'Buffers: 2780 kB' 'Cached: 13390120 kB' 'SwapCached: 0 kB' 'Active: 9458212 kB' 'Inactive: 4601696 kB' 'Active(anon): 8886852 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 676248 kB' 'Mapped: 208172 kB' 'Shmem: 8219844 kB' 'KReclaimable: 580464 kB' 'Slab: 1291216 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710752 kB' 'KernelStack: 25104 kB' 'PageTables: 11252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10585508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230568 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 
-- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.430 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.430 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 
16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 
00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 
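This block of trace is verify_nr_hugepages re-reading the global counters so it can assert, as it did at setup/hugepages.sh@110 for the previous test, that HugePages_Total equals the requested page count plus surplus plus reserved pages. The numbers for this per_node_1G_alloc run line up as in the worked check below; the variable names are illustrative bookkeeping using the values visible in the dumps above, not the script's own code.

# 1 GiB was requested per node: 1048576 kB / 2048 kB per huge page = 512 pages
NRHUGE=512
nodes=2                                     # HUGENODE=0,1
hugepagesize_kb=2048
nr_hugepages=$(( NRHUGE * nodes ))          # 1024, the total the test configures
surp=0                                      # HugePages_Surp in the dumps above
resv=0                                      # HugePages_Rsvd in the dumps above
total=1024                                  # HugePages_Total read back from /proc/meminfo
(( total == nr_hugepages + surp + resv )) && echo "hugepage count verified"
echo "Hugetlb: $(( nr_hugepages * hugepagesize_kb )) kB"   # 2097152 kB, matching the dumps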
00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.431 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.431 16:01:48 -- setup/common.sh@33 -- # echo 0 00:02:49.431 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.431 16:01:48 -- setup/hugepages.sh@99 -- # surp=0 00:02:49.431 16:01:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:49.431 16:01:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:49.431 16:01:48 -- setup/common.sh@18 -- # local node= 00:02:49.431 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.431 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.431 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.431 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.431 16:01:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.431 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.431 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.431 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106556276 kB' 'MemAvailable: 111268452 kB' 'Buffers: 2780 kB' 'Cached: 13390132 kB' 'SwapCached: 0 kB' 'Active: 9457792 kB' 'Inactive: 4601696 kB' 'Active(anon): 8886432 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675848 kB' 'Mapped: 208172 kB' 'Shmem: 8219856 kB' 'KReclaimable: 580464 kB' 'Slab: 1291216 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710752 kB' 'KernelStack: 25088 kB' 'PageTables: 11224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10588900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230568 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- 
setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 
16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.432 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.432 
16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.432 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.433 16:01:48 -- setup/common.sh@33 -- # echo 0 00:02:49.433 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.433 16:01:48 -- setup/hugepages.sh@100 -- # resv=0 00:02:49.433 16:01:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:49.433 nr_hugepages=1024 00:02:49.433 16:01:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:49.433 resv_hugepages=0 00:02:49.433 16:01:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:49.433 surplus_hugepages=0 00:02:49.433 16:01:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:49.433 anon_hugepages=0 00:02:49.433 16:01:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.433 16:01:48 -- 
setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:49.433 16:01:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:49.433 16:01:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.433 16:01:48 -- setup/common.sh@18 -- # local node= 00:02:49.433 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.433 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.433 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.433 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.433 16:01:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.433 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.433 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106557736 kB' 'MemAvailable: 111269912 kB' 'Buffers: 2780 kB' 'Cached: 13390152 kB' 'SwapCached: 0 kB' 'Active: 9457716 kB' 'Inactive: 4601696 kB' 'Active(anon): 8886356 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675720 kB' 'Mapped: 208088 kB' 'Shmem: 8219876 kB' 'KReclaimable: 580464 kB' 'Slab: 1291180 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710716 kB' 'KernelStack: 25088 kB' 'PageTables: 11152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10588920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230488 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.433 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.433 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 
16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.434 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.434 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.434 16:01:48 -- setup/common.sh@33 -- # echo 1024 00:02:49.434 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.434 16:01:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.434 16:01:48 -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.434 16:01:48 -- setup/hugepages.sh@27 -- # local node 00:02:49.434 16:01:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.434 16:01:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.434 16:01:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.434 16:01:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.434 16:01:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.434 16:01:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.434 16:01:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.434 16:01:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.434 16:01:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.434 16:01:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.434 16:01:48 -- setup/common.sh@18 -- # local node=0 00:02:49.434 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.434 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.434 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.434 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.434 16:01:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.434 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.435 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': 
' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52942252 kB' 'MemUsed: 12813728 kB' 'SwapCached: 0 kB' 'Active: 6593952 kB' 'Inactive: 3451672 kB' 'Active(anon): 6185776 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593784 kB' 'Mapped: 129336 kB' 'AnonPages: 460976 kB' 'Shmem: 5733936 kB' 'KernelStack: 14088 kB' 'PageTables: 7252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 666660 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 401500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- 
# continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.435 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.435 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.435 16:01:48 -- setup/common.sh@33 -- # echo 0 00:02:49.435 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.435 16:01:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.436 16:01:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.436 16:01:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.436 16:01:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.436 16:01:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.436 16:01:48 -- setup/common.sh@18 -- # local node=1 00:02:49.436 16:01:48 -- setup/common.sh@19 -- # local var val 00:02:49.436 16:01:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.436 16:01:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.436 16:01:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.436 16:01:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.436 16:01:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.436 16:01:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 53615484 kB' 'MemUsed: 7066524 kB' 'SwapCached: 0 kB' 'Active: 2864152 kB' 'Inactive: 1150024 kB' 'Active(anon): 2700968 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1150024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3799176 kB' 'Mapped: 78752 kB' 'AnonPages: 215168 kB' 'Shmem: 2485968 kB' 'KernelStack: 10952 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315304 kB' 'Slab: 624520 kB' 'SReclaimable: 315304 kB' 'SUnreclaim: 309216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 
-- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.436 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.436 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.437 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.437 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.437 16:01:48 -- setup/common.sh@32 -- # continue 00:02:49.437 16:01:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.437 16:01:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.437 16:01:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.437 16:01:48 -- setup/common.sh@33 -- # echo 0 00:02:49.437 16:01:48 -- setup/common.sh@33 -- # return 0 00:02:49.437 16:01:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.437 16:01:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.437 16:01:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.437 16:01:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.437 node0=512 expecting 512 00:02:49.437 16:01:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.437 16:01:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.437 16:01:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.437 16:01:48 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.437 node1=512 expecting 512 00:02:49.437 16:01:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.437 00:02:49.437 real 0m2.824s 00:02:49.437 user 0m0.908s 00:02:49.437 sys 0m1.672s 00:02:49.437 16:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.437 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:02:49.437 ************************************ 00:02:49.437 END TEST 
per_node_1G_alloc 00:02:49.437 ************************************ 00:02:49.437 16:01:48 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:49.437 16:01:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:49.437 16:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:49.437 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:02:49.437 ************************************ 00:02:49.437 START TEST even_2G_alloc 00:02:49.437 ************************************ 00:02:49.437 16:01:48 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:02:49.437 16:01:48 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:49.437 16:01:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:49.437 16:01:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:49.437 16:01:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:49.437 16:01:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:49.437 16:01:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.437 16:01:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:49.437 16:01:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.437 16:01:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.437 16:01:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.437 16:01:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.437 16:01:48 -- setup/hugepages.sh@83 -- # : 512 00:02:49.437 16:01:48 -- setup/hugepages.sh@84 -- # : 1 00:02:49.437 16:01:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.437 16:01:48 -- setup/hugepages.sh@83 -- # : 0 00:02:49.437 16:01:48 -- setup/hugepages.sh@84 -- # : 0 00:02:49.437 16:01:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.437 16:01:48 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:49.437 16:01:48 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:49.437 16:01:48 -- setup/hugepages.sh@153 -- # setup output 00:02:49.437 16:01:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.437 16:01:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:52.746 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:52.746 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 
0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:52.746 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:52.746 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:52.746 16:01:51 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:52.746 16:01:51 -- setup/hugepages.sh@89 -- # local node 00:02:52.746 16:01:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.746 16:01:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.746 16:01:51 -- setup/hugepages.sh@92 -- # local surp 00:02:52.746 16:01:51 -- setup/hugepages.sh@93 -- # local resv 00:02:52.746 16:01:51 -- setup/hugepages.sh@94 -- # local anon 00:02:52.746 16:01:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.746 16:01:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.746 16:01:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.746 16:01:51 -- setup/common.sh@18 -- # local node= 00:02:52.746 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.746 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.746 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.746 16:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.746 16:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.746 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.746 16:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106602784 kB' 'MemAvailable: 111314960 kB' 'Buffers: 2780 kB' 'Cached: 13390248 kB' 'SwapCached: 0 kB' 'Active: 9446676 kB' 'Inactive: 4601696 kB' 'Active(anon): 8875316 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663720 kB' 'Mapped: 206820 kB' 'Shmem: 8219972 kB' 'KReclaimable: 580464 kB' 'Slab: 1289192 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 708728 kB' 'KernelStack: 25200 kB' 'PageTables: 10604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10516740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.746 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.746 16:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.746 
16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 
-- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.747 16:01:51 -- setup/common.sh@33 -- # echo 0 00:02:52.747 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.747 16:01:51 -- setup/hugepages.sh@97 -- # anon=0 00:02:52.747 16:01:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.747 16:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.747 16:01:51 -- setup/common.sh@18 -- # local node= 00:02:52.747 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.747 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.747 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.747 16:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.747 16:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.747 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.747 16:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106601164 kB' 'MemAvailable: 111313340 kB' 'Buffers: 2780 kB' 'Cached: 13390252 kB' 'SwapCached: 0 kB' 'Active: 9446300 kB' 'Inactive: 4601696 kB' 'Active(anon): 8874940 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663836 kB' 'Mapped: 206796 kB' 'Shmem: 8219976 kB' 'KReclaimable: 580464 kB' 'Slab: 1288992 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 708528 kB' 'KernelStack: 25072 kB' 'PageTables: 10372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10516748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230552 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- 
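[Note on the lookup that just completed above (anon=0): each get_meminfo call in this trace reads the meminfo file into an array, strips any "Node N " prefix, then walks the "key: value" pairs with IFS=': ', continuing past every key until the requested one matches and its value is echoed. A rough, self-contained sketch of that pattern follows; it is not the project's setup/common.sh, and the function name and sed-based prefix strip are assumptions. The HugePages_Surp lookup continues below.]

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: print the value of one meminfo key,
# optionally from a specific NUMA node's meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <id> "; drop that, then split
    # each "key: value" line and print the value once the requested key matches.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

[Usage, on a box like the one in this log: "get_meminfo_sketch HugePages_Surp" prints 0, and "get_meminfo_sketch HugePages_Total 0" prints node 0's count, 512.]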
setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.747 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.747 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # 
continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 
16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.748 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.748 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.749 16:01:51 -- setup/common.sh@33 -- # echo 0 00:02:52.749 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.749 16:01:51 -- setup/hugepages.sh@99 -- # surp=0 00:02:52.749 16:01:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.749 16:01:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.749 16:01:51 -- setup/common.sh@18 -- # local node= 00:02:52.749 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.749 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.749 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.749 16:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.749 16:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.749 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.749 16:01:51 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106601532 kB' 'MemAvailable: 111313708 kB' 'Buffers: 2780 kB' 'Cached: 13390264 kB' 'SwapCached: 0 kB' 'Active: 9445624 kB' 'Inactive: 4601696 kB' 'Active(anon): 8874264 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663544 kB' 'Mapped: 206720 kB' 'Shmem: 8219988 kB' 'KReclaimable: 580464 kB' 'Slab: 1288796 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 708332 kB' 'KernelStack: 25024 kB' 'PageTables: 10028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10515260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230536 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 
00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.749 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.749 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.750 16:01:51 -- setup/common.sh@33 -- # echo 0 00:02:52.750 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.750 16:01:51 -- setup/hugepages.sh@100 -- # resv=0 00:02:52.750 16:01:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:52.750 nr_hugepages=1024 00:02:52.750 16:01:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:52.750 resv_hugepages=0 00:02:52.750 16:01:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.750 surplus_hugepages=0 00:02:52.750 16:01:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:52.750 anon_hugepages=0 00:02:52.750 16:01:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.750 16:01:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:52.750 16:01:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:52.750 16:01:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:52.750 16:01:51 -- setup/common.sh@18 -- # local node= 00:02:52.750 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.750 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.750 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.750 16:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.750 16:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.750 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.750 16:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106601432 kB' 'MemAvailable: 111313608 kB' 'Buffers: 2780 kB' 'Cached: 13390264 kB' 'SwapCached: 0 kB' 'Active: 9445204 kB' 'Inactive: 4601696 kB' 'Active(anon): 8873844 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 663124 kB' 'Mapped: 206720 kB' 'Shmem: 8219988 kB' 'KReclaimable: 580464 kB' 'Slab: 1288792 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 708328 kB' 'KernelStack: 24992 kB' 'PageTables: 10048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10516780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230568 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.750 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.750 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # 
continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.751 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.751 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.752 16:01:51 -- setup/common.sh@33 -- # echo 1024 00:02:52.752 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.752 16:01:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.752 16:01:51 -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.752 16:01:51 -- setup/hugepages.sh@27 -- # local node 00:02:52.752 16:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.752 16:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.752 16:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.752 16:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.752 16:01:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.752 16:01:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.752 16:01:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.752 16:01:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.752 16:01:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.752 16:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.752 16:01:51 -- setup/common.sh@18 -- # local node=0 00:02:52.752 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.752 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.752 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.752 16:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.752 16:01:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.752 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.752 16:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52995224 kB' 'MemUsed: 12760756 kB' 'SwapCached: 0 kB' 'Active: 6585200 kB' 'Inactive: 3451672 kB' 'Active(anon): 6177024 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593824 kB' 'Mapped: 128064 kB' 'AnonPages: 452140 kB' 'Shmem: 5733976 kB' 'KernelStack: 14136 kB' 'PageTables: 6324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 665240 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 400080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 
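The trace around this point is setup/common.sh's get_meminfo walking every field of a meminfo file until it reaches the requested key; each skipped field appears as a "[[ ... ]]" test followed by "continue", and the match ends with "echo <value>" and "return 0". A minimal sketch of that parsing loop follows. It is illustrative only: the field names, file paths, and echoed values come from the trace, while the function name get_meminfo_sketch and the exact structure are assumptions, not the real setup/common.sh.

  #!/usr/bin/env bash
  # Sketch only: mirrors the loop seen in the xtrace above, not the real script.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}          # field name, optional NUMA node
      local mem_f=/proc/meminfo line var val _ mem
      # Per-node queries switch to the node-specific meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip that prefix.
      [[ -n $node ]] && mem=("${mem[@]/#Node $node /}")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # every skipped field is a "continue" in the trace
          echo "$val"                        # e.g. 0 for HugePages_Surp on node 0 above
          return 0
      done
      return 1
  }
  # e.g.: get_meminfo_sketch HugePages_Surp 0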
00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.752 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.752 16:01:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.752 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@33 -- # echo 0 00:02:52.753 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.753 16:01:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.753 16:01:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.753 16:01:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.753 16:01:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:52.753 16:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.753 16:01:51 -- setup/common.sh@18 -- # local node=1 00:02:52.753 16:01:51 -- setup/common.sh@19 -- # local var val 00:02:52.753 16:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.753 16:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.753 16:01:51 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node1/meminfo ]] 00:02:52.753 16:01:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:52.753 16:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.753 16:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 53602768 kB' 'MemUsed: 7079240 kB' 'SwapCached: 0 kB' 'Active: 2860496 kB' 'Inactive: 1150024 kB' 'Active(anon): 2697312 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1150024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3799260 kB' 'Mapped: 78656 kB' 'AnonPages: 211392 kB' 'Shmem: 2486052 kB' 'KernelStack: 10952 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315304 kB' 'Slab: 623552 kB' 'SReclaimable: 315304 kB' 'SUnreclaim: 308248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 
00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.753 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.753 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
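Alongside this loop, setup/hugepages.sh is doing the per-node bookkeeping: it adds any reserved pages to each node's count, fetches that node's HugePages_Surp, and finally echoes "nodeN=<count> expecting <count>" (both 512 per node in this run). A rough sketch of that bookkeeping is below; the array names follow the trace, but their exact sourcing and the helper call are assumptions layered on the sketch above, not the literal hugepages.sh code.

  # Sketch of the per-node check; both arrays hold 512 per node in the run above,
  # and how each is populated differs in the real script.
  nodes_test=([0]=512 [1]=512)
  nodes_sys=([0]=512 [1]=512)
  resv=0

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv )) || true
      surp=$(get_meminfo_sketch HugePages_Surp "$node")   # "0" in the trace above
      (( nodes_test[node] += surp )) || true
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  # Prints "node0=512 expecting 512" and "node1=512 expecting 512", matching the log.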
00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # continue 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.754 16:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.754 16:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.754 16:01:51 -- setup/common.sh@33 -- # echo 0 00:02:52.754 16:01:51 -- setup/common.sh@33 -- # return 0 00:02:52.754 16:01:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.754 16:01:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.754 16:01:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.754 16:01:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:52.754 node0=512 expecting 512 00:02:52.754 16:01:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.754 16:01:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.754 16:01:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.754 16:01:51 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:52.754 node1=512 expecting 512 00:02:52.754 16:01:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:52.754 00:02:52.754 real 0m3.164s 00:02:52.754 user 0m1.028s 00:02:52.754 sys 0m1.929s 00:02:52.754 16:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.754 16:01:51 -- common/autotest_common.sh@10 -- # set +x 00:02:52.754 ************************************ 00:02:52.754 END TEST even_2G_alloc 00:02:52.754 ************************************ 00:02:52.754 16:01:51 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:52.754 16:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:52.754 16:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:52.754 16:01:51 -- common/autotest_common.sh@10 -- # set +x 00:02:52.754 ************************************ 00:02:52.754 START TEST odd_alloc 00:02:52.754 ************************************ 00:02:52.754 16:01:51 -- common/autotest_common.sh@1104 -- # odd_alloc 00:02:52.754 16:01:51 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:52.754 16:01:51 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:52.754 16:01:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:52.754 16:01:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.754 16:01:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.754 16:01:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.754 16:01:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:52.754 16:01:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.754 16:01:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.754 
16:01:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.754 16:01:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:52.754 16:01:51 -- setup/hugepages.sh@83 -- # : 513 00:02:52.754 16:01:51 -- setup/hugepages.sh@84 -- # : 1 00:02:52.754 16:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:52.754 16:01:51 -- setup/hugepages.sh@83 -- # : 0 00:02:52.754 16:01:51 -- setup/hugepages.sh@84 -- # : 0 00:02:52.754 16:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.754 16:01:51 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:52.754 16:01:51 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:52.754 16:01:51 -- setup/hugepages.sh@160 -- # setup output 00:02:52.754 16:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.754 16:01:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:54.824 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:54.824 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.824 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.824 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:54.824 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:54.824 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:55.085 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:55.085 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:55.085 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:55.085 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:55.086 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:55.086 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:55.086 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:55.086 16:01:54 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:55.086 16:01:54 -- setup/hugepages.sh@89 -- # local node 00:02:55.086 16:01:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.086 16:01:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.086 16:01:54 -- setup/hugepages.sh@92 -- # local surp 00:02:55.086 16:01:54 -- setup/hugepages.sh@93 -- # local resv 00:02:55.086 16:01:54 -- setup/hugepages.sh@94 -- # local anon 00:02:55.086 16:01:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.086 16:01:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.351 16:01:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.351 16:01:54 -- setup/common.sh@18 -- # local node= 00:02:55.351 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.351 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.351 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.351 16:01:54 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:55.351 16:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.351 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.351 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106579332 kB' 'MemAvailable: 111291508 kB' 'Buffers: 2780 kB' 'Cached: 13390384 kB' 'SwapCached: 0 kB' 'Active: 9446184 kB' 'Inactive: 4601696 kB' 'Active(anon): 8874824 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663480 kB' 'Mapped: 206812 kB' 'Shmem: 8220108 kB' 'KReclaimable: 580464 kB' 'Slab: 1289820 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709356 kB' 'KernelStack: 24880 kB' 'PageTables: 9792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10512888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230632 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.351 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.351 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 
16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.352 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.352 16:01:54 -- setup/common.sh@33 -- # echo 0 00:02:55.352 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.352 16:01:54 -- setup/hugepages.sh@97 -- # anon=0 00:02:55.352 16:01:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.352 16:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.352 16:01:54 -- setup/common.sh@18 -- # local node= 00:02:55.352 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.352 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.352 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.352 16:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.352 16:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.352 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.352 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.352 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106583904 kB' 'MemAvailable: 111296080 kB' 'Buffers: 2780 kB' 'Cached: 13390384 kB' 'SwapCached: 0 kB' 'Active: 9446004 kB' 'Inactive: 4601696 kB' 
'Active(anon): 8874644 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663360 kB' 'Mapped: 206808 kB' 'Shmem: 8220108 kB' 'KReclaimable: 580464 kB' 'Slab: 1289796 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709332 kB' 'KernelStack: 24880 kB' 'PageTables: 9792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10512900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230584 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 
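The odd_alloc test started above requests 1025 hugepages (an odd count, hence the name), and the earlier get_test_nr_hugepages_per_node trace shows them handed out as 513 to node 0 and 512 to node 1. A small sketch that reproduces that split is below; it only illustrates the arithmetic implied by the trace and is not the actual get_test_nr_hugepages_per_node implementation.

  # Illustrative only: split an odd page count across the nodes, last node first.
  nr=1025
  no_nodes=2
  nodes=()
  while (( no_nodes > 0 )); do
      nodes[no_nodes - 1]=$(( nr / no_nodes ))   # this node's share of what is left
      (( nr -= nodes[no_nodes - 1] )) || true
      (( no_nodes-- )) || true
  done
  echo "${nodes[@]}"   # -> "513 512", matching the per-node values in the trace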
00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- 
setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 16:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.354 16:01:54 -- setup/common.sh@33 -- # echo 0 00:02:55.354 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.354 16:01:54 -- setup/hugepages.sh@99 -- # surp=0 00:02:55.354 16:01:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.354 16:01:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.354 16:01:54 -- setup/common.sh@18 -- # local node= 00:02:55.354 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.354 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.354 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.354 16:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.354 16:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.354 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.354 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106584532 kB' 'MemAvailable: 111296708 kB' 'Buffers: 2780 kB' 'Cached: 13390384 kB' 'SwapCached: 0 kB' 'Active: 9445328 kB' 'Inactive: 4601696 kB' 'Active(anon): 8873968 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663120 kB' 'Mapped: 206728 kB' 'Shmem: 8220108 kB' 'KReclaimable: 580464 kB' 'Slab: 1289792 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709328 kB' 'KernelStack: 24896 kB' 'PageTables: 9720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10512912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.354 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- 
setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 
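The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" lines above are xtrace output of a field-matching scan over /proc/meminfo: each "key: value" line is read with IFS=': ' and skipped until the requested key is found, whose value is then echoed back. A minimal standalone sketch of that pattern follows; meminfo_value is an illustrative name, not the actual setup/common.sh helper.

#!/usr/bin/env bash
# Illustrative stand-in for the field lookup traced above: scan a
# meminfo-style file and print the value of a single key.
meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching field
        echo "$val"
        return 0
    done < "$file"
    return 1
}

meminfo_value HugePages_Rsvd    # prints e.g. 0 on this system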
00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.355 16:01:54 -- setup/common.sh@33 -- # echo 0 00:02:55.355 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.355 16:01:54 -- setup/hugepages.sh@100 -- # resv=0 00:02:55.355 16:01:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:55.355 nr_hugepages=1025 00:02:55.355 16:01:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.355 resv_hugepages=0 00:02:55.355 16:01:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.355 surplus_hugepages=0 00:02:55.355 16:01:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.355 anon_hugepages=0 00:02:55.355 16:01:54 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:55.355 16:01:54 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:55.355 16:01:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.355 16:01:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.355 16:01:54 -- setup/common.sh@18 -- # local node= 00:02:55.355 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.355 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.355 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.355 16:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.355 16:01:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.355 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.355 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106584660 kB' 'MemAvailable: 111296836 kB' 'Buffers: 2780 kB' 'Cached: 13390412 kB' 'SwapCached: 0 kB' 'Active: 9445280 kB' 'Inactive: 4601696 kB' 'Active(anon): 8873920 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663048 kB' 'Mapped: 206728 kB' 'Shmem: 8220136 kB' 'KReclaimable: 580464 kB' 'Slab: 1289792 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709328 kB' 'KernelStack: 24864 kB' 'PageTables: 9624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10512928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230584 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 
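With both surplus and reserved hugepages read back as 0, the trace goes on to assert the accounting identity for this run: the HugePages_Total reported by /proc/meminfo (1025) must equal nr_hugepages plus the surplus and reserved counts. A hedged sketch of that check, reading the same fields with awk instead of the traced read loop:

#!/usr/bin/env bash
# Consistency check mirrored from the trace: total == requested + surplus + reserved
# (1025 == 1025 + 0 + 0 in this log). nr_hugepages is the value the test configured.
nr_hugepages=1025
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2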
00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 16:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
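The get_nodes step traced a little further down records an expected hugepage count per NUMA node (512 pages for node0 and 513 for node1 in this odd_alloc case). One way to collect those per-node numbers is the sysfs interface sketched below; this shows the data source, not necessarily the exact mechanism in setup/hugepages.sh.

#!/usr/bin/env bash
# Collect the configured 2 MiB hugepage count for every NUMA node from sysfs.
declare -A node_pages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_pages[$node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
for node in "${!node_pages[@]}"; do
    echo "node$node: ${node_pages[$node]} hugepages"   # e.g. node0: 512, node1: 513
done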
00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.357 16:01:54 -- setup/common.sh@33 -- # echo 1025 00:02:55.357 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.357 16:01:54 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:55.357 16:01:54 -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.357 16:01:54 -- setup/hugepages.sh@27 -- # local node 00:02:55.357 16:01:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.357 16:01:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:55.357 16:01:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.357 16:01:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:55.357 16:01:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.357 16:01:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.357 16:01:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.357 16:01:54 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:02:55.357 16:01:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.357 16:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.357 16:01:54 -- setup/common.sh@18 -- # local node=0 00:02:55.357 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.357 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.357 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.357 16:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.357 16:01:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.357 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.357 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52991768 kB' 'MemUsed: 12764212 kB' 'SwapCached: 0 kB' 'Active: 6585292 kB' 'Inactive: 3451672 kB' 'Active(anon): 6177116 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593872 kB' 'Mapped: 128076 kB' 'AnonPages: 452176 kB' 'Shmem: 5734024 kB' 'KernelStack: 13896 kB' 'PageTables: 5856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 665776 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 400616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 
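When get_meminfo is called with a node argument, the trace shows mem_f switching to /sys/devices/system/node/node0/meminfo; every line in that file carries a "Node 0 " prefix, which the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips before the same key scan runs. A hedged sketch of that node-aware lookup; node_meminfo_value is an illustrative name.

#!/usr/bin/env bash
# Node-aware variant of the lookup: read the per-node meminfo file, drop the
# "Node N " prefix from each line, then match key/value as before.
shopt -s extglob
node_meminfo_value() {
    local get=$1 node=$2
    local file=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }                 # strip the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

node_meminfo_value HugePages_Surp 0    # node0 surplus, 0 in this log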
00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 
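The remainder of the trace turns those per-node readings into the final odd_alloc verdict: node0 is echoed as 512 expecting 513 and node1 as 513 expecting 512, and the test passes because it only requires the two count lists to match as a whole ("512 513" == "512 513"). A hedged sketch of that order-insensitive comparison (function and variable names are illustrative):

#!/usr/bin/env bash
# The 1025 pages may be split across the nodes in either order, so the check
# compares the sorted per-node counts as whole lists rather than node by node.
expected=(512 513)   # what the test configured across the two nodes
actual=(513 512)     # what the nodes actually report (order may differ)

sorted() { printf '%s\n' "$@" | sort -n | xargs; }

if [[ $(sorted "${expected[@]}") == $(sorted "${actual[@]}") ]]; then
    echo "odd hugepage split verified"
else
    echo "unexpected per-node hugepage distribution" >&2
fi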
00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@33 -- # echo 0 00:02:55.358 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.358 16:01:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.358 16:01:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.358 16:01:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.358 16:01:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:55.358 16:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.358 16:01:54 -- setup/common.sh@18 -- # local node=1 00:02:55.358 16:01:54 -- setup/common.sh@19 -- # local var val 00:02:55.358 16:01:54 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.358 16:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.358 16:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:55.358 16:01:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:55.358 16:01:54 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.358 16:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 53593556 kB' 'MemUsed: 7088452 kB' 'SwapCached: 0 kB' 'Active: 2859952 kB' 'Inactive: 1150024 kB' 'Active(anon): 2696768 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1150024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3799348 kB' 'Mapped: 78652 kB' 'AnonPages: 210796 kB' 'Shmem: 2486140 kB' 'KernelStack: 10936 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315304 kB' 'Slab: 624016 kB' 'SReclaimable: 315304 kB' 'SUnreclaim: 308712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.358 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # 
continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # continue 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 16:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 16:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.359 16:01:54 -- setup/common.sh@33 -- # echo 0 00:02:55.359 16:01:54 -- setup/common.sh@33 -- # return 0 00:02:55.359 16:01:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.359 16:01:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.359 16:01:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.359 16:01:54 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:55.359 node0=512 expecting 513 00:02:55.359 16:01:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.359 16:01:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.359 16:01:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.359 16:01:54 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:55.359 node1=513 expecting 512 00:02:55.359 16:01:54 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:55.359 00:02:55.359 real 0m2.639s 00:02:55.359 user 0m0.861s 00:02:55.359 sys 0m1.511s 00:02:55.359 16:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.359 16:01:54 -- common/autotest_common.sh@10 -- # set +x 00:02:55.359 ************************************ 00:02:55.359 END TEST odd_alloc 00:02:55.359 ************************************ 00:02:55.359 16:01:54 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:55.359 16:01:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:55.359 16:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:55.359 16:01:54 -- common/autotest_common.sh@10 -- # set +x 00:02:55.359 ************************************ 00:02:55.359 START TEST custom_alloc 00:02:55.359 ************************************ 00:02:55.359 16:01:54 -- common/autotest_common.sh@1104 -- # custom_alloc 00:02:55.359 16:01:54 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:55.359 16:01:54 -- setup/hugepages.sh@169 -- # local node 00:02:55.359 16:01:54 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:55.359 16:01:54 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:55.359 16:01:54 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:55.359 16:01:54 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:55.359 16:01:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:55.359 16:01:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:55.359 16:01:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:55.359 16:01:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:55.359 16:01:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.359 16:01:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:55.359 16:01:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.359 16:01:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.359 16:01:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.359 16:01:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:55.359 16:01:54 -- setup/hugepages.sh@83 -- # : 256 00:02:55.359 16:01:54 -- setup/hugepages.sh@84 -- # : 1 00:02:55.359 16:01:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:55.359 16:01:54 -- setup/hugepages.sh@83 -- # : 0 00:02:55.359 16:01:54 -- setup/hugepages.sh@84 -- # : 0 00:02:55.359 16:01:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:55.359 16:01:54 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:55.359 16:01:54 -- 
setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:55.359 16:01:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:55.359 16:01:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:55.359 16:01:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:55.359 16:01:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:55.359 16:01:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.359 16:01:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:55.359 16:01:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.359 16:01:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.359 16:01:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.359 16:01:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.359 16:01:54 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:55.360 16:01:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.360 16:01:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:55.360 16:01:54 -- setup/hugepages.sh@78 -- # return 0 00:02:55.360 16:01:54 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:55.360 16:01:54 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:55.360 16:01:54 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:55.360 16:01:54 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:55.360 16:01:54 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:55.360 16:01:54 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:55.360 16:01:54 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:55.360 16:01:54 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:55.360 16:01:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:55.360 16:01:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:55.360 16:01:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:55.360 16:01:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:55.360 16:01:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:55.360 16:01:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:55.360 16:01:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:55.360 16:01:54 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:55.360 16:01:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.360 16:01:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:55.360 16:01:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:55.360 16:01:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:55.360 16:01:54 -- setup/hugepages.sh@78 -- # return 0 00:02:55.360 16:01:54 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:55.360 16:01:54 -- setup/hugepages.sh@187 -- # setup output 00:02:55.360 16:01:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.360 16:01:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:57.909 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:57.909 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:6f:01.0 (8086 0b25): Already using the vfio-pci 
driver 00:02:57.909 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:57.909 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:57.909 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:57.909 16:01:56 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:57.909 16:01:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:57.909 16:01:56 -- setup/hugepages.sh@89 -- # local node 00:02:57.909 16:01:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:57.909 16:01:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:57.909 16:01:56 -- setup/hugepages.sh@92 -- # local surp 00:02:57.909 16:01:56 -- setup/hugepages.sh@93 -- # local resv 00:02:57.909 16:01:56 -- setup/hugepages.sh@94 -- # local anon 00:02:57.909 16:01:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:57.909 16:01:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:57.909 16:01:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:57.909 16:01:56 -- setup/common.sh@18 -- # local node= 00:02:57.909 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.909 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.909 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.909 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.909 16:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.909 16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.909 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105553504 kB' 'MemAvailable: 110265680 kB' 'Buffers: 2780 kB' 'Cached: 13390504 kB' 'SwapCached: 0 kB' 'Active: 9446676 kB' 'Inactive: 4601696 kB' 'Active(anon): 8875316 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664392 kB' 'Mapped: 206736 kB' 'Shmem: 8220228 kB' 'KReclaimable: 580464 kB' 'Slab: 1291356 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710892 kB' 'KernelStack: 24944 kB' 'PageTables: 9860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10513396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230616 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.909 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.909 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 
16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- 
setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:57.910 16:01:56 -- setup/common.sh@33 -- # echo 0 00:02:57.910 16:01:56 -- setup/common.sh@33 -- # return 0 00:02:57.910 16:01:56 -- setup/hugepages.sh@97 -- # anon=0 00:02:57.910 16:01:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:57.910 16:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.910 16:01:56 -- setup/common.sh@18 -- # local node= 00:02:57.910 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.910 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.910 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.910 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.910 16:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.910 16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.910 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105553040 kB' 'MemAvailable: 110265216 kB' 'Buffers: 2780 kB' 'Cached: 13390504 kB' 'SwapCached: 0 kB' 'Active: 9447632 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876272 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664888 kB' 'Mapped: 206816 kB' 'Shmem: 8220228 kB' 'KReclaimable: 580464 kB' 'Slab: 1291428 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710964 kB' 'KernelStack: 24944 kB' 'PageTables: 9868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10513408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.910 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.910 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 
00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.911 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.911 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.911 16:01:56 -- setup/common.sh@33 -- # echo 0 00:02:57.911 16:01:56 -- setup/common.sh@33 -- # return 0 
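[Editor's note] The trace above ends with the meminfo helper echoing 0 and returning. For readability, here is a minimal sketch of what that helper in setup/common.sh appears to do, reconstructed from the trace alone: pick /proc/meminfo (or a node-specific meminfo file when a node argument is supplied), strip any "Node N " prefixes, then scan line by line, skipping every key until the requested one (here HugePages_Surp) and echoing its value. This is not the SPDK script itself; names and structure beyond what the trace shows are assumptions.

    # Hypothetical re-creation of the helper whose xtrace is interleaved above;
    # inferred from the log only, so details not visible in the trace are assumptions.
    shopt -s extglob   # needed for the "Node N " prefix strip seen in the trace

    get_meminfo() {
        local get=$1           # key to look up, e.g. HugePages_Surp
        local node=${2:-}      # optional NUMA node; empty means system-wide /proc/meminfo
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # When a node is given and per-node stats exist, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every key until the requested one
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done
        return 1
    }

In this run the call is equivalent to get_meminfo HugePages_Surp, which prints 0; the next trace line stores it as surp=0, and the same scan is then repeated for HugePages_Rsvd and HugePages_Total before the 1536-page total is verified.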
00:02:57.911 16:01:56 -- setup/hugepages.sh@99 -- # surp=0 00:02:57.911 16:01:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.911 16:01:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.911 16:01:56 -- setup/common.sh@18 -- # local node= 00:02:57.911 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.911 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.911 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.911 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.911 16:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.912 16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.912 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105557084 kB' 'MemAvailable: 110269260 kB' 'Buffers: 2780 kB' 'Cached: 13390504 kB' 'SwapCached: 0 kB' 'Active: 9447312 kB' 'Inactive: 4601696 kB' 'Active(anon): 8875952 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664556 kB' 'Mapped: 206816 kB' 'Shmem: 8220228 kB' 'KReclaimable: 580464 kB' 'Slab: 1291412 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710948 kB' 'KernelStack: 24880 kB' 'PageTables: 9660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10513420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230552 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 
-- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 
16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.912 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.912 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 
16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:57.913 16:01:56 -- setup/common.sh@33 -- # echo 0 00:02:57.913 16:01:56 -- setup/common.sh@33 -- # return 0 00:02:57.913 16:01:56 -- setup/hugepages.sh@100 -- # resv=0 00:02:57.913 16:01:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:57.913 nr_hugepages=1536 00:02:57.913 16:01:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:57.913 resv_hugepages=0 00:02:57.913 16:01:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:57.913 surplus_hugepages=0 00:02:57.913 16:01:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:57.913 anon_hugepages=0 00:02:57.913 16:01:56 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:57.913 16:01:56 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:57.913 16:01:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:57.913 16:01:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:57.913 16:01:56 -- setup/common.sh@18 -- # local node= 00:02:57.913 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.913 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.913 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.913 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.913 16:01:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.913 
16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.913 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105556832 kB' 'MemAvailable: 110269008 kB' 'Buffers: 2780 kB' 'Cached: 13390524 kB' 'SwapCached: 0 kB' 'Active: 9447092 kB' 'Inactive: 4601696 kB' 'Active(anon): 8875732 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664196 kB' 'Mapped: 206816 kB' 'Shmem: 8220248 kB' 'KReclaimable: 580464 kB' 'Slab: 1291412 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 710948 kB' 'KernelStack: 24896 kB' 'PageTables: 9692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10513436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230552 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.913 16:01:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.913 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # 
continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # 
continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.914 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.914 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:57.914 16:01:56 -- setup/common.sh@33 -- # echo 1536 00:02:57.914 16:01:56 -- setup/common.sh@33 -- # return 0 00:02:57.914 16:01:56 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:57.915 16:01:56 -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.915 16:01:56 -- setup/hugepages.sh@27 -- # local node 00:02:57.915 16:01:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.915 16:01:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.915 16:01:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.915 16:01:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:57.915 16:01:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.915 16:01:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.915 16:01:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.915 16:01:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.915 16:01:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.915 16:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.915 16:01:56 -- setup/common.sh@18 -- # local node=0 00:02:57.915 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.915 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.915 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.915 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.915 16:01:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.915 16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.915 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 53006176 kB' 'MemUsed: 12749804 kB' 'SwapCached: 0 kB' 'Active: 6585984 kB' 'Inactive: 3451672 kB' 'Active(anon): 6177808 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593916 kB' 'Mapped: 128084 kB' 'AnonPages: 452780 kB' 'Shmem: 5734068 kB' 'KernelStack: 13880 kB' 'PageTables: 5768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 265160 kB' 'Slab: 666060 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 400900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 
-- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.915 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.915 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@33 -- # echo 0 00:02:57.916 16:01:56 -- setup/common.sh@33 -- # return 0 
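[editor's note] The records above show setup/common.sh resolving HugePages_Surp for node 0: it picks /sys/devices/system/node/node0/meminfo over /proc/meminfo, reads the whole file with mapfile, strips the "Node N " prefix, then walks the fields until the requested key matches and echoes its value. The lines below are a condensed, self-contained sketch of that lookup pattern as it appears in the trace; helper name and structure are a reconstruction for readability, not the script's verbatim source.

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern traced above (reconstruction, not the
  # actual setup/common.sh): read the file, drop any "Node <n> " prefix, scan for the key.
  shopt -s extglob
  meminfo_lookup() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given and a per-node meminfo exists, read that instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  # e.g. meminfo_lookup HugePages_Surp 0   # prints the node-0 surplus count (0 in the run above)
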
00:02:57.916 16:01:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.916 16:01:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.916 16:01:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.916 16:01:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.916 16:01:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.916 16:01:56 -- setup/common.sh@18 -- # local node=1 00:02:57.916 16:01:56 -- setup/common.sh@19 -- # local var val 00:02:57.916 16:01:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.916 16:01:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.916 16:01:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:57.916 16:01:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:57.916 16:01:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.916 16:01:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 52550596 kB' 'MemUsed: 8131412 kB' 'SwapCached: 0 kB' 'Active: 2860236 kB' 'Inactive: 1150024 kB' 'Active(anon): 2697052 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1150024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3799404 kB' 'Mapped: 78652 kB' 'AnonPages: 211016 kB' 'Shmem: 2486196 kB' 'KernelStack: 11016 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315304 kB' 'Slab: 625360 kB' 'SReclaimable: 315304 kB' 'SUnreclaim: 310056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 
00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ KernelStack 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.916 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.916 16:01:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 
16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # continue 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.917 16:01:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.917 16:01:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.917 16:01:56 -- setup/common.sh@33 -- # echo 0 00:02:57.917 16:01:56 -- setup/common.sh@33 -- # return 0 00:02:57.917 16:01:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.917 16:01:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.917 16:01:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.917 16:01:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.917 16:01:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:57.917 node0=512 expecting 512 00:02:57.917 16:01:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.917 16:01:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.917 16:01:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.917 16:01:56 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:57.917 node1=1024 expecting 1024 00:02:57.917 16:01:56 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:57.917 00:02:57.917 real 0m2.658s 00:02:57.917 user 0m0.880s 00:02:57.917 sys 0m1.528s 00:02:57.917 16:01:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.917 16:01:56 -- common/autotest_common.sh@10 -- # set +x 00:02:57.917 ************************************ 00:02:57.917 END TEST custom_alloc 00:02:57.917 ************************************ 00:02:58.180 16:01:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:58.180 16:01:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:58.180 16:01:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:58.180 16:01:56 -- common/autotest_common.sh@10 -- # set +x 00:02:58.180 ************************************ 00:02:58.180 START TEST no_shrink_alloc 00:02:58.180 ************************************ 00:02:58.180 16:01:56 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:02:58.180 16:01:56 -- setup/hugepages.sh@195 -- # 
get_test_nr_hugepages 2097152 0 00:02:58.180 16:01:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:58.180 16:01:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:58.180 16:01:56 -- setup/hugepages.sh@51 -- # shift 00:02:58.180 16:01:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:58.180 16:01:56 -- setup/hugepages.sh@52 -- # local node_ids 00:02:58.180 16:01:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:58.180 16:01:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:58.180 16:01:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:58.180 16:01:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:58.180 16:01:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:58.180 16:01:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:58.180 16:01:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:58.180 16:01:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:58.180 16:01:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:58.180 16:01:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:58.180 16:01:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:58.180 16:01:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:58.180 16:01:56 -- setup/hugepages.sh@73 -- # return 0 00:02:58.180 16:01:56 -- setup/hugepages.sh@198 -- # setup output 00:02:58.180 16:01:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.180 16:01:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:00.728 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.728 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:00.728 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.728 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.728 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.728 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.728 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.728 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.729 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.729 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.729 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:00.729 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:00.729 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:00.993 16:01:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:00.993 16:01:59 -- setup/hugepages.sh@89 -- # local node 00:03:00.993 16:01:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.993 16:01:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.993 16:01:59 -- setup/hugepages.sh@92 -- # local surp 00:03:00.993 16:01:59 -- setup/hugepages.sh@93 -- # local resv 00:03:00.993 16:01:59 -- setup/hugepages.sh@94 -- # local anon 00:03:00.994 16:01:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.994 16:01:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.994 16:01:59 -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.994 16:01:59 -- setup/common.sh@18 -- # local node= 00:03:00.994 16:01:59 -- setup/common.sh@19 -- # local var val 00:03:00.994 16:01:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.994 16:01:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.994 16:01:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.994 16:01:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.994 16:01:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.994 16:01:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106603624 kB' 'MemAvailable: 111315800 kB' 'Buffers: 2780 kB' 'Cached: 13390640 kB' 'SwapCached: 0 kB' 'Active: 9447956 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876596 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665160 kB' 'Mapped: 206832 kB' 'Shmem: 8220364 kB' 'KReclaimable: 580464 kB' 'Slab: 1290348 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709884 kB' 'KernelStack: 24944 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10513872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.994 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.994 16:01:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.994 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.995 16:01:59 -- setup/common.sh@33 -- # echo 0 00:03:00.995 16:01:59 -- setup/common.sh@33 -- # return 0 00:03:00.995 16:01:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:00.995 16:01:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.995 16:01:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.995 16:01:59 -- setup/common.sh@18 -- # local node= 00:03:00.995 16:01:59 -- setup/common.sh@19 -- # local var val 00:03:00.995 16:01:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.995 16:01:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.995 16:01:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.995 16:01:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.995 16:01:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.995 16:01:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
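[editor's note] At this point verify_nr_hugepages has established anon=0 and is about to read the surplus counter; the overall shape of the check, both here and in the custom_alloc run earlier (node0=512 expecting 512, node1=1024 expecting 1024), is: read the global hugepage counters, compare against the requested total, then confirm each NUMA node carries its expected share. Below is a minimal sketch of that flow under those assumptions; the function name is hypothetical and it reuses the meminfo_lookup sketch from the note after the node-0 query above, so this is an illustration of the pattern, not the script's source.

  #!/usr/bin/env bash
  # Minimal sketch of the verification flow seen in the trace (hypothetical names).
  verify_hugepages_sketch() {
    local expected_total=$1; shift
    local -a expected_per_node=("$@")        # e.g. 512 1024, as in the custom_alloc run
    local nr surp resv
    nr=$(meminfo_lookup HugePages_Total)
    surp=$(meminfo_lookup HugePages_Surp)
    resv=$(meminfo_lookup HugePages_Rsvd)
    echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"
    (( nr == expected_total )) || return 1
    local node got
    for node in "${!expected_per_node[@]}"; do
      got=$(meminfo_lookup HugePages_Total "$node")
      echo "node$node=$got expecting ${expected_per_node[node]}"
      (( got == expected_per_node[node] )) || return 1
    done
  }
  # e.g. verify_hugepages_sketch 1536 512 1024   # mirrors the custom_alloc expectations above
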
00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106602760 kB' 'MemAvailable: 111314936 kB' 'Buffers: 2780 kB' 'Cached: 13390640 kB' 'SwapCached: 0 kB' 'Active: 9448676 kB' 'Inactive: 4601696 kB' 'Active(anon): 8877316 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665876 kB' 'Mapped: 206832 kB' 'Shmem: 8220364 kB' 'KReclaimable: 580464 kB' 'Slab: 1290364 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709900 kB' 'KernelStack: 24976 kB' 'PageTables: 9856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10513884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230568 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- 
setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.995 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.995 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.996 16:01:59 -- setup/common.sh@33 -- # echo 0 00:03:00.996 16:01:59 -- setup/common.sh@33 -- # return 0 00:03:00.996 16:01:59 -- setup/hugepages.sh@99 -- # surp=0 00:03:00.996 16:01:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.996 16:01:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.996 16:01:59 -- setup/common.sh@18 -- # local node= 00:03:00.996 16:01:59 -- setup/common.sh@19 -- # local var val 00:03:00.996 16:01:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.996 16:01:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.996 16:01:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.996 16:01:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.996 16:01:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.996 16:01:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106606212 kB' 'MemAvailable: 111318388 kB' 'Buffers: 2780 kB' 'Cached: 13390640 kB' 'SwapCached: 0 kB' 'Active: 9447648 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876288 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664812 kB' 'Mapped: 206828 kB' 'Shmem: 8220364 kB' 'KReclaimable: 580464 kB' 'Slab: 1290320 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709856 kB' 'KernelStack: 24928 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 
'Committed_AS: 10513896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230552 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.996 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.996 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- 
setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 
16:01:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.997 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.997 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 
16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.998 16:01:59 -- setup/common.sh@33 -- # echo 0 00:03:00.998 16:01:59 -- setup/common.sh@33 -- # return 0 00:03:00.998 16:01:59 -- setup/hugepages.sh@100 -- # resv=0 00:03:00.998 16:01:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:00.998 nr_hugepages=1024 00:03:00.998 16:01:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.998 resv_hugepages=0 00:03:00.998 16:01:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.998 surplus_hugepages=0 00:03:00.998 16:01:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.998 anon_hugepages=0 00:03:00.998 16:01:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.998 16:01:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:00.998 16:01:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:00.998 16:01:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.998 16:01:59 -- setup/common.sh@18 -- # local node= 00:03:00.998 16:01:59 -- setup/common.sh@19 -- # local var val 00:03:00.998 16:01:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.998 16:01:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.998 16:01:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.998 16:01:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.998 16:01:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.998 16:01:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106605208 kB' 'MemAvailable: 111317384 kB' 'Buffers: 2780 kB' 'Cached: 13390664 kB' 'SwapCached: 0 kB' 'Active: 9447816 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876456 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665428 kB' 'Mapped: 206752 kB' 'Shmem: 8220388 kB' 'KReclaimable: 580464 kB' 'Slab: 1290296 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709832 kB' 'KernelStack: 24928 kB' 'PageTables: 9676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10513912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230552 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.998 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.998 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- 
# continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # continue 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.999 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.999 16:01:59 -- setup/common.sh@33 -- # echo 1024 00:03:00.999 16:01:59 -- setup/common.sh@33 -- # return 0 00:03:00.999 16:01:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.999 16:01:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:00.999 16:01:59 -- setup/hugepages.sh@27 -- # local node 00:03:00.999 16:01:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.999 16:01:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:00.999 16:01:59 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:00.999 16:01:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.999 16:01:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.999 16:01:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.999 16:01:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.999 16:01:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.999 16:01:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:00.999 16:01:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.999 16:01:59 -- setup/common.sh@18 -- # local node=0 00:03:00.999 16:01:59 -- setup/common.sh@19 -- # local var val 00:03:00.999 16:01:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.999 16:01:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.999 16:01:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:00.999 16:01:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:00.999 16:01:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.999 16:01:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.999 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 51961648 kB' 'MemUsed: 13794332 kB' 'SwapCached: 0 kB' 'Active: 6586148 kB' 'Inactive: 3451672 kB' 'Active(anon): 6177972 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9593980 kB' 'Mapped: 128100 kB' 'AnonPages: 452996 kB' 'Shmem: 5734132 kB' 'KernelStack: 13944 kB' 'PageTables: 5952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 665568 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 400408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 
-- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # continue 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.000 16:01:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.000 16:01:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.000 16:01:59 -- setup/common.sh@33 -- # echo 0 00:03:01.000 16:01:59 -- setup/common.sh@33 -- # return 0 00:03:01.000 16:01:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.000 16:01:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.000 16:01:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.000 16:01:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.000 16:01:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:01.000 node0=1024 expecting 1024 00:03:01.000 16:01:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.000 16:01:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:01.000 16:01:59 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:01.000 16:01:59 -- setup/hugepages.sh@202 -- # setup output 00:03:01.000 16:01:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.000 16:01:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:03.549 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.549 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:03:03.549 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.549 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.550 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.550 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.550 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.550 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.812 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.812 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.812 0000:79:01.0 (8086 0b25): Already using the 
vfio-pci driver 00:03:03.812 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.812 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.812 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.812 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.812 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:03.812 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:03.812 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:03:03.812 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:03.812 16:02:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:03.812 16:02:02 -- setup/hugepages.sh@89 -- # local node 00:03:03.812 16:02:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.812 16:02:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.812 16:02:02 -- setup/hugepages.sh@92 -- # local surp 00:03:03.812 16:02:02 -- setup/hugepages.sh@93 -- # local resv 00:03:03.812 16:02:02 -- setup/hugepages.sh@94 -- # local anon 00:03:03.812 16:02:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.812 16:02:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.812 16:02:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.812 16:02:02 -- setup/common.sh@18 -- # local node= 00:03:03.812 16:02:02 -- setup/common.sh@19 -- # local var val 00:03:03.813 16:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.813 16:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.813 16:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.813 16:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.813 16:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.813 16:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106626728 kB' 'MemAvailable: 111338904 kB' 'Buffers: 2780 kB' 'Cached: 13390756 kB' 'SwapCached: 0 kB' 'Active: 9447928 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876568 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664384 kB' 'Mapped: 206856 kB' 'Shmem: 8220480 kB' 'KReclaimable: 580464 kB' 'Slab: 1290108 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709644 kB' 'KernelStack: 25008 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10514524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230616 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 
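The repeated IFS=': ' / read -r var val _ / continue entries in this trace are setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key, then echoing that field's value. Below is a minimal sketch of that lookup, reconstructed from the trace only (the real get_meminfo helper also handles per-node meminfo files under /sys/devices/system/node and strips their Node prefix, so treat this as an approximation, not the script itself):

#!/usr/bin/env bash
# Sketch only: simplified rebuild of the field lookup the xtrace above shows.
# The real setup/common.sh helper also handles per-node meminfo files.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the key matches
        echo "$val"                        # e.g. 0 for HugePages_Surp above
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp    # prints 0 on the machine in this log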
00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- 
# continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.813 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.813 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 
16:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # continue 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.814 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.814 16:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.814 16:02:02 -- setup/common.sh@33 -- # echo 0 00:03:03.814 16:02:02 -- setup/common.sh@33 -- # return 0 00:03:03.814 16:02:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:03.814 16:02:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.814 16:02:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.814 16:02:02 -- setup/common.sh@18 -- # local node= 00:03:03.814 16:02:02 -- setup/common.sh@19 -- # local var val 00:03:03.814 16:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.814 16:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.814 16:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.814 16:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.814 16:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.814 16:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.079 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.079 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.079 16:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106625972 kB' 'MemAvailable: 111338148 kB' 'Buffers: 2780 kB' 'Cached: 13390756 kB' 'SwapCached: 0 kB' 'Active: 9448588 kB' 'Inactive: 4601696 kB' 'Active(anon): 8877228 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665064 kB' 'Mapped: 206856 kB' 'Shmem: 8220480 kB' 'KReclaimable: 580464 kB' 'Slab: 1290080 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709616 kB' 'KernelStack: 25008 kB' 'PageTables: 9828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10514536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230616 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:04.079 16:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.079 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.079 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.079 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.079 16:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.079 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.079 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 
-- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.080 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.080 16:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.081 16:02:02 -- setup/common.sh@33 -- # echo 0 00:03:04.081 16:02:02 -- setup/common.sh@33 -- # return 0 00:03:04.081 16:02:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:04.081 16:02:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.081 16:02:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.081 16:02:02 -- setup/common.sh@18 -- # local node= 00:03:04.081 16:02:02 -- setup/common.sh@19 -- # local var val 00:03:04.081 16:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.081 16:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.081 
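The meminfo snapshots dumped in this trace are internally consistent: with a Hugepagesize of 2048 kB and HugePages_Total of 1024, the cumulative Hugetlb figure in the same dump works out to 1024 pages times 2048 kB. A quick check of that arithmetic (numbers copied from the snapshot above, not newly measured):

# hugetlb_kB = HugePages_Total * Hugepagesize, values taken from the dump
echo $(( 1024 * 2048 ))    # 2097152, matching the 'Hugetlb: 2097152 kB' field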
16:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.081 16:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.081 16:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.081 16:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106626928 kB' 'MemAvailable: 111339104 kB' 'Buffers: 2780 kB' 'Cached: 13390768 kB' 'SwapCached: 0 kB' 'Active: 9447904 kB' 'Inactive: 4601696 kB' 'Active(anon): 8876544 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664840 kB' 'Mapped: 206840 kB' 'Shmem: 8220492 kB' 'KReclaimable: 580464 kB' 'Slab: 1290080 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709616 kB' 'KernelStack: 24976 kB' 'PageTables: 9732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10514548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- 
setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.081 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.081 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.082 16:02:02 -- setup/common.sh@33 -- # echo 0 00:03:04.082 16:02:02 -- setup/common.sh@33 -- # return 0 00:03:04.082 16:02:02 -- setup/hugepages.sh@100 -- # resv=0 00:03:04.082 16:02:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.082 nr_hugepages=1024 00:03:04.082 16:02:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.082 resv_hugepages=0 00:03:04.082 16:02:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.082 surplus_hugepages=0 00:03:04.082 16:02:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.082 anon_hugepages=0 00:03:04.082 16:02:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.082 16:02:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.082 16:02:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.082 16:02:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.082 16:02:02 -- setup/common.sh@18 -- # local node= 00:03:04.082 16:02:02 -- setup/common.sh@19 -- # local var val 00:03:04.082 16:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.082 16:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.082 16:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.082 16:02:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.082 16:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.082 16:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106627180 kB' 'MemAvailable: 111339356 kB' 'Buffers: 2780 kB' 'Cached: 13390768 kB' 'SwapCached: 0 kB' 'Active: 
9447076 kB' 'Inactive: 4601696 kB' 'Active(anon): 8875716 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601696 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664468 kB' 'Mapped: 206764 kB' 'Shmem: 8220492 kB' 'KReclaimable: 580464 kB' 'Slab: 1290080 kB' 'SReclaimable: 580464 kB' 'SUnreclaim: 709616 kB' 'KernelStack: 24960 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10514564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 185856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.082 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.082 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 
16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.083 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.083 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.084 16:02:02 -- setup/common.sh@33 -- # echo 1024 00:03:04.084 16:02:02 -- setup/common.sh@33 -- # return 0 00:03:04.084 16:02:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.084 16:02:02 -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.084 16:02:02 -- setup/hugepages.sh@27 -- # local node 00:03:04.084 16:02:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.084 16:02:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.084 16:02:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.084 16:02:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.084 16:02:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.084 16:02:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.084 16:02:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.084 16:02:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.084 16:02:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.084 16:02:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.084 16:02:02 -- setup/common.sh@18 -- # local node=0 00:03:04.084 16:02:02 -- setup/common.sh@19 -- # local var val 00:03:04.084 16:02:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.084 16:02:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.084 16:02:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.084 16:02:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.084 16:02:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.084 16:02:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 51954440 kB' 'MemUsed: 13801540 kB' 'SwapCached: 0 kB' 'Active: 6587228 kB' 'Inactive: 3451672 kB' 'Active(anon): 6179052 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3451672 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9594044 kB' 'Mapped: 128112 kB' 'AnonPages: 453992 kB' 'Shmem: 5734196 kB' 'KernelStack: 13928 kB' 'PageTables: 5864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 265160 kB' 'Slab: 665308 kB' 'SReclaimable: 265160 kB' 'SUnreclaim: 400148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.084 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.084 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 
-- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # continue 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.085 16:02:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.085 16:02:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.085 16:02:02 -- setup/common.sh@33 -- # echo 0 00:03:04.085 16:02:02 -- setup/common.sh@33 -- # return 0 00:03:04.085 16:02:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.085 16:02:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.085 16:02:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.085 16:02:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.085 16:02:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.085 node0=1024 expecting 1024 00:03:04.085 16:02:02 
-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.085 00:03:04.085 real 0m5.945s 00:03:04.085 user 0m1.918s 00:03:04.085 sys 0m3.567s 00:03:04.085 16:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.085 16:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:04.085 ************************************ 00:03:04.085 END TEST no_shrink_alloc 00:03:04.085 ************************************ 00:03:04.085 16:02:02 -- setup/hugepages.sh@217 -- # clear_hp 00:03:04.085 16:02:02 -- setup/hugepages.sh@37 -- # local node hp 00:03:04.085 16:02:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.085 16:02:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.085 16:02:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.085 16:02:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.085 16:02:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.085 16:02:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.085 16:02:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.085 16:02:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.085 16:02:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.085 16:02:02 -- setup/hugepages.sh@41 -- # echo 0 00:03:04.085 16:02:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.085 16:02:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.085 00:03:04.085 real 0m21.701s 00:03:04.085 user 0m6.765s 00:03:04.085 sys 0m12.250s 00:03:04.085 16:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.085 16:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:04.085 ************************************ 00:03:04.085 END TEST hugepages 00:03:04.085 ************************************ 00:03:04.085 16:02:02 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:04.085 16:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.085 16:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.085 16:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:04.085 ************************************ 00:03:04.085 START TEST driver 00:03:04.085 ************************************ 00:03:04.085 16:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:04.085 * Looking for test storage... 
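The no_shrink_alloc/hugepages traces above repeatedly step a get_meminfo loop over /sys/devices/system/node/nodeN/meminfo, stripping the "Node N " prefix, splitting each line on ': ', and echoing a single field such as HugePages_Surp. A minimal bash sketch of that pattern, under the assumption of a hypothetical helper name and argument order (the real setup/common.sh may differ in detail):

    # Hypothetical condensation of the traced get_meminfo loop:
    # read one node's meminfo, drop the "Node N " prefix, print one field.
    get_node_meminfo() {
            local get=$1 node=$2 var val _
            while IFS=': ' read -r var val _; do
                    # e.g. var=HugePages_Surp val=0, or var=MemTotal val=65755980 _=kB
                    [[ $var == "$get" ]] && { echo "$val"; return 0; }
            done < <(sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo")
            return 1
    }
    # get_node_meminfo HugePages_Surp 0   ->  0, matching the "echo 0" in the trace above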
00:03:04.085 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:04.085 16:02:02 -- setup/driver.sh@68 -- # setup reset 00:03:04.085 16:02:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.085 16:02:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.295 16:02:07 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:08.295 16:02:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.295 16:02:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.295 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:08.295 ************************************ 00:03:08.295 START TEST guess_driver 00:03:08.295 ************************************ 00:03:08.295 16:02:07 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:08.295 16:02:07 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:08.295 16:02:07 -- setup/driver.sh@47 -- # local fail=0 00:03:08.295 16:02:07 -- setup/driver.sh@49 -- # pick_driver 00:03:08.295 16:02:07 -- setup/driver.sh@36 -- # vfio 00:03:08.295 16:02:07 -- setup/driver.sh@21 -- # local iommu_grups 00:03:08.295 16:02:07 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:08.295 16:02:07 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:08.295 16:02:07 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:08.295 16:02:07 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:08.295 16:02:07 -- setup/driver.sh@29 -- # (( 335 > 0 )) 00:03:08.295 16:02:07 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:08.295 16:02:07 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:08.295 16:02:07 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:08.295 16:02:07 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:08.295 16:02:07 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:08.295 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:08.295 16:02:07 -- setup/driver.sh@30 -- # return 0 00:03:08.295 16:02:07 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:08.295 16:02:07 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:08.295 16:02:07 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:08.295 16:02:07 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:08.295 Looking for driver=vfio-pci 00:03:08.295 16:02:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.295 16:02:07 -- setup/driver.sh@45 -- # setup output config 00:03:08.295 16:02:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.295 16:02:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.604 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.604 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.604 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.866 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.866 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.866 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.866 16:02:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:11.866 16:02:10 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:11.866 16:02:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.440 16:02:11 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:12.440 16:02:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.440 16:02:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.440 16:02:11 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.440 16:02:11 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.440 16:02:11 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.701 16:02:11 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:12.701 16:02:11 -- setup/driver.sh@65 -- # setup reset 00:03:12.701 16:02:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.701 16:02:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.919 00:03:16.919 real 0m8.571s 00:03:16.919 user 0m2.013s 00:03:16.919 sys 0m4.159s 00:03:16.919 16:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.919 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:16.919 ************************************ 00:03:16.919 END TEST guess_driver 00:03:16.919 ************************************ 00:03:16.919 00:03:16.919 real 0m12.882s 00:03:16.919 user 0m3.032s 00:03:16.919 sys 0m6.281s 00:03:16.919 16:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.919 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:16.919 ************************************ 00:03:16.919 END TEST driver 00:03:16.919 ************************************ 00:03:16.919 16:02:15 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:16.919 16:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.919 16:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.919 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:16.919 ************************************ 00:03:16.919 START TEST devices 00:03:16.919 ************************************ 00:03:16.919 16:02:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:17.180 * Looking for test storage... 
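The devices traces below qualify a disk by checking that it is not zoned, that spdk-gpt.py/blkid find no existing partition table ("No valid GPT data, bailing"), and that its size meets min_disk_size=3221225472. A minimal bash sketch of that qualification, assuming a hypothetical function name and using blkid only (the real setup/devices.sh also runs scripts/spdk-gpt.py before falling back to blkid):

    # Hypothetical distillation of the traced block-device checks:
    # not zoned, no partition table, and at least min_disk_size bytes.
    qualifies_for_test() {
            local block=$1 min_disk_size=3221225472      # 3 GiB threshold from the trace
            if [[ -e /sys/block/$block/queue/zoned ]]; then
                    [[ $(< "/sys/block/$block/queue/zoned") == none ]] || return 1   # skip zoned devices
            fi
            [[ -z $(blkid -s PTTYPE -o value "/dev/$block") ]] || return 1           # no existing partition table
            (( $(blockdev --getsize64 "/dev/$block") >= min_disk_size ))
    }
    # qualifies_for_test nvme0n1 && echo usable   # nvme0n1 reports 960197124096 bytes in the trace below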
00:03:17.181 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:17.181 16:02:15 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:17.181 16:02:15 -- setup/devices.sh@192 -- # setup reset 00:03:17.181 16:02:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.181 16:02:15 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.484 16:02:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:20.484 16:02:18 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:20.484 16:02:18 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:20.484 16:02:18 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:20.484 16:02:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:20.484 16:02:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:20.484 16:02:18 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:20.484 16:02:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.484 16:02:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:20.484 16:02:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:20.484 16:02:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:03:20.484 16:02:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:03:20.484 16:02:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:20.484 16:02:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:20.484 16:02:18 -- setup/devices.sh@196 -- # blocks=() 00:03:20.484 16:02:18 -- setup/devices.sh@196 -- # declare -a blocks 00:03:20.484 16:02:18 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:20.484 16:02:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:20.484 16:02:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:20.484 16:02:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:20.484 16:02:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:20.484 16:02:18 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:20.484 16:02:18 -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:03:20.484 16:02:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:20.484 16:02:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:20.484 16:02:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:20.484 16:02:18 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:20.484 No valid GPT data, bailing 00:03:20.484 16:02:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.484 16:02:18 -- scripts/common.sh@393 -- # pt= 00:03:20.484 16:02:18 -- scripts/common.sh@394 -- # return 1 00:03:20.484 16:02:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:20.484 16:02:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:20.484 16:02:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:20.484 16:02:18 -- setup/common.sh@80 -- # echo 960197124096 00:03:20.484 16:02:18 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:20.484 16:02:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:20.484 16:02:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:03:20.484 16:02:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:20.484 16:02:18 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:20.484 16:02:18 -- 
setup/devices.sh@201 -- # ctrl=nvme1 00:03:20.484 16:02:18 -- setup/devices.sh@202 -- # pci=0000:03:00.0 00:03:20.484 16:02:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:03:20.484 16:02:18 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:20.484 16:02:18 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:20.484 16:02:18 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:20.484 No valid GPT data, bailing 00:03:20.484 16:02:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:20.484 16:02:18 -- scripts/common.sh@393 -- # pt= 00:03:20.485 16:02:18 -- scripts/common.sh@394 -- # return 1 00:03:20.485 16:02:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:20.485 16:02:18 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:20.485 16:02:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:20.485 16:02:18 -- setup/common.sh@80 -- # echo 960197124096 00:03:20.485 16:02:18 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:20.485 16:02:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:20.485 16:02:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:03:00.0 00:03:20.485 16:02:18 -- setup/devices.sh@209 -- # (( 2 > 0 )) 00:03:20.485 16:02:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:20.485 16:02:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:20.485 16:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.485 16:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.485 16:02:18 -- common/autotest_common.sh@10 -- # set +x 00:03:20.485 ************************************ 00:03:20.485 START TEST nvme_mount 00:03:20.485 ************************************ 00:03:20.485 16:02:18 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:20.485 16:02:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:20.485 16:02:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:20.485 16:02:18 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.485 16:02:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.485 16:02:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:20.485 16:02:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:20.485 16:02:18 -- setup/common.sh@40 -- # local part_no=1 00:03:20.485 16:02:18 -- setup/common.sh@41 -- # local size=1073741824 00:03:20.485 16:02:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:20.485 16:02:18 -- setup/common.sh@44 -- # parts=() 00:03:20.485 16:02:18 -- setup/common.sh@44 -- # local parts 00:03:20.485 16:02:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:20.485 16:02:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.485 16:02:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.485 16:02:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:20.485 16:02:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.485 16:02:18 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:20.485 16:02:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:20.485 16:02:19 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:21.430 Creating new GPT entries in memory. 00:03:21.430 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:21.430 other utilities. 00:03:21.430 16:02:20 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:21.430 16:02:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.430 16:02:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.430 16:02:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.430 16:02:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:22.375 Creating new GPT entries in memory. 00:03:22.375 The operation has completed successfully. 00:03:22.375 16:02:21 -- setup/common.sh@57 -- # (( part++ )) 00:03:22.375 16:02:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.375 16:02:21 -- setup/common.sh@62 -- # wait 2847728 00:03:22.375 16:02:21 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.375 16:02:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:22.375 16:02:21 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.375 16:02:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:22.375 16:02:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:22.375 16:02:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.375 16:02:21 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.375 16:02:21 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:22.375 16:02:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:22.375 16:02:21 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.375 16:02:21 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.375 16:02:21 -- setup/devices.sh@53 -- # local found=0 00:03:22.375 16:02:21 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:22.375 16:02:21 -- setup/devices.sh@56 -- # : 00:03:22.375 16:02:21 -- setup/devices.sh@59 -- # local pci status 00:03:22.375 16:02:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.375 16:02:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:22.375 16:02:21 -- setup/devices.sh@47 -- # setup output config 00:03:22.375 16:02:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.375 16:02:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:24.924 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.924 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.924 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.924 16:02:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:24.924 16:02:23 -- setup/devices.sh@63 -- # found=1 00:03:24.924 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.924 16:02:23 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.925 16:02:23 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.925 16:02:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.187 16:02:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.187 16:02:23 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:25.187 16:02:23 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.187 16:02:23 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:25.187 16:02:23 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.187 16:02:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:25.187 16:02:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.187 16:02:23 -- setup/devices.sh@21 -- # umount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.187 16:02:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:25.187 16:02:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:25.187 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:25.187 16:02:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:25.187 16:02:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:25.448 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:25.448 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:25.448 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:25.448 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:25.448 16:02:24 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:25.448 16:02:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:25.448 16:02:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.448 16:02:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:25.448 16:02:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:25.448 16:02:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.448 16:02:24 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.448 16:02:24 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:25.448 16:02:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:25.448 16:02:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.448 16:02:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.448 16:02:24 -- setup/devices.sh@53 -- # local found=0 00:03:25.448 16:02:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:25.448 16:02:24 -- setup/devices.sh@56 -- # : 00:03:25.448 16:02:24 -- setup/devices.sh@59 -- # local pci status 00:03:25.448 16:02:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.448 16:02:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:25.448 16:02:24 -- setup/devices.sh@47 -- # setup output config 00:03:25.448 16:02:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.448 16:02:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:27.995 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.995 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.995 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:27.996 16:02:26 -- setup/devices.sh@63 -- # found=1 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 
0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.996 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:27.996 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.257 16:02:26 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:28.257 16:02:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.519 16:02:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:28.519 16:02:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:28.519 16:02:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.519 16:02:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.519 16:02:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.519 16:02:27 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.519 16:02:27 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:28.519 16:02:27 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 
00:03:28.519 16:02:27 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:28.519 16:02:27 -- setup/devices.sh@50 -- # local mount_point= 00:03:28.519 16:02:27 -- setup/devices.sh@51 -- # local test_file= 00:03:28.519 16:02:27 -- setup/devices.sh@53 -- # local found=0 00:03:28.519 16:02:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:28.519 16:02:27 -- setup/devices.sh@59 -- # local pci status 00:03:28.519 16:02:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.519 16:02:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:28.519 16:02:27 -- setup/devices.sh@47 -- # setup output config 00:03:28.519 16:02:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.519 16:02:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:31.069 16:02:29 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.069 16:02:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:31.330 16:02:30 -- setup/devices.sh@63 -- # found=1 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.330 16:02:30 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:31.330 16:02:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.590 16:02:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.590 16:02:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:31.590 16:02:30 -- setup/devices.sh@68 -- # return 0 00:03:31.590 16:02:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:31.590 16:02:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.590 16:02:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:31.590 16:02:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.590 16:02:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.590 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.590 00:03:31.590 real 0m11.468s 00:03:31.590 user 0m2.915s 00:03:31.590 sys 0m5.751s 00:03:31.590 16:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.590 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:03:31.590 ************************************ 00:03:31.590 END TEST nvme_mount 00:03:31.590 ************************************ 00:03:31.590 16:02:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:31.590 16:02:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.590 16:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.590 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:03:31.590 ************************************ 00:03:31.590 START TEST dm_mount 00:03:31.590 ************************************ 00:03:31.590 16:02:30 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:31.590 16:02:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:31.590 16:02:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:31.590 16:02:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:31.590 16:02:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:31.590 16:02:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.590 16:02:30 -- setup/common.sh@40 -- # local part_no=2 00:03:31.590 16:02:30 -- setup/common.sh@41 -- # local size=1073741824 00:03:31.590 16:02:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.590 16:02:30 -- setup/common.sh@44 -- # parts=() 00:03:31.590 16:02:30 -- setup/common.sh@44 -- # local parts 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.590 16:02:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part++ )) 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.590 16:02:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part++ )) 00:03:31.590 16:02:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.590 16:02:30 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:31.590 16:02:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.590 16:02:30 -- setup/common.sh@53 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:32.977 Creating new GPT entries in memory. 00:03:32.977 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.977 other utilities. 00:03:32.977 16:02:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.977 16:02:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.977 16:02:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.977 16:02:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.977 16:02:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.920 Creating new GPT entries in memory. 00:03:33.920 The operation has completed successfully. 00:03:33.920 16:02:32 -- setup/common.sh@57 -- # (( part++ )) 00:03:33.920 16:02:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.920 16:02:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.920 16:02:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.920 16:02:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:34.865 The operation has completed successfully. 00:03:34.865 16:02:33 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.865 16:02:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.865 16:02:33 -- setup/common.sh@62 -- # wait 2852839 00:03:34.865 16:02:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:34.865 16:02:33 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.865 16:02:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.865 16:02:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:34.865 16:02:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:34.865 16:02:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.865 16:02:33 -- setup/devices.sh@161 -- # break 00:03:34.865 16:02:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.865 16:02:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:34.865 16:02:33 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:34.865 16:02:33 -- setup/devices.sh@166 -- # dm=dm-0 00:03:34.865 16:02:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:34.865 16:02:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:34.865 16:02:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.865 16:02:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:03:34.865 16:02:33 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.865 16:02:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.865 16:02:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:34.865 16:02:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.865 16:02:33 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.865 16:02:33 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:34.865 16:02:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:34.865 16:02:33 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:34.865 16:02:33 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.865 16:02:33 -- setup/devices.sh@53 -- # local found=0 00:03:34.865 16:02:33 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:34.865 16:02:33 -- setup/devices.sh@56 -- # : 00:03:34.865 16:02:33 -- setup/devices.sh@59 -- # local pci status 00:03:34.865 16:02:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.865 16:02:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:34.865 16:02:33 -- setup/devices.sh@47 -- # setup output config 00:03:34.865 16:02:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.866 16:02:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:37.414 16:02:35 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:37.414 16:02:36 -- setup/devices.sh@63 -- # found=1 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.414 16:02:36 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:37.414 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.676 16:02:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.676 16:02:36 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:37.676 16:02:36 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:37.676 16:02:36 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.676 16:02:36 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.676 16:02:36 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:37.676 16:02:36 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:37.676 16:02:36 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:37.676 16:02:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:37.676 16:02:36 -- setup/devices.sh@50 -- # local mount_point= 00:03:37.676 16:02:36 -- setup/devices.sh@51 -- # local test_file= 00:03:37.676 16:02:36 -- setup/devices.sh@53 -- # local found=0 00:03:37.676 16:02:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.676 16:02:36 -- setup/devices.sh@59 -- # local pci status 00:03:37.676 16:02:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.676 16:02:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:37.676 16:02:36 -- setup/devices.sh@47 -- # setup output config 00:03:37.676 16:02:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.676 16:02:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:40.227 16:02:38 -- setup/devices.sh@63 -- # found=1 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.227 16:02:38 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:40.227 16:02:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.489 16:02:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.489 16:02:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:40.489 16:02:39 -- setup/devices.sh@68 -- # return 0 00:03:40.489 16:02:39 -- setup/devices.sh@187 -- # cleanup_dm 00:03:40.489 16:02:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:40.489 16:02:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.489 16:02:39 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:40.489 16:02:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.489 16:02:39 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:40.489 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.489 16:02:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.489 16:02:39 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:40.489 
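Note: cleanup_dm above tears the device-mapper target down in the reverse order it was built: unmount the test mount point if it is still mounted, remove the dm node, then wipe the filesystem signatures from both backing partitions. A hedged sketch of that sequence with the device names from this run (error handling omitted):

dm_mount=$rootdir/test/setup/dm_mount
mountpoint -q "$dm_mount" && umount "$dm_mount"            # only unmount if something is still there
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
  [[ -b $part ]] && wipefs --all "$part"                   # drop the ext4 signature left behind by mkfs
done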
00:03:40.489 real 0m8.750s 00:03:40.489 user 0m1.822s 00:03:40.489 sys 0m3.479s 00:03:40.489 16:02:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.489 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.489 ************************************ 00:03:40.489 END TEST dm_mount 00:03:40.489 ************************************ 00:03:40.489 16:02:39 -- setup/devices.sh@1 -- # cleanup 00:03:40.489 16:02:39 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:40.489 16:02:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.489 16:02:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.489 16:02:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:40.489 16:02:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.489 16:02:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.751 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:40.751 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:40.751 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.751 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.751 16:02:39 -- setup/devices.sh@12 -- # cleanup_dm 00:03:40.751 16:02:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:40.751 16:02:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.751 16:02:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.751 16:02:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.751 16:02:39 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.751 16:02:39 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:40.751 00:03:40.751 real 0m23.743s 00:03:40.751 user 0m5.823s 00:03:40.751 sys 0m11.293s 00:03:40.751 16:02:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.751 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.751 ************************************ 00:03:40.751 END TEST devices 00:03:40.751 ************************************ 00:03:40.751 00:03:40.751 real 1m18.987s 00:03:40.751 user 0m21.474s 00:03:40.751 sys 0m41.225s 00:03:40.751 16:02:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.751 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.751 ************************************ 00:03:40.751 END TEST setup.sh 00:03:40.751 ************************************ 00:03:40.751 16:02:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:43.300 Hugepages 00:03:43.300 node hugesize free / total 00:03:43.300 node0 1048576kB 0 / 0 00:03:43.300 node0 2048kB 2048 / 2048 00:03:43.300 node1 1048576kB 0 / 0 00:03:43.300 node1 2048kB 0 / 0 00:03:43.300 00:03:43.300 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.300 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:03:43.300 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:43.300 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:43.300 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:43.300 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:43.300 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:43.300 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:43.300 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:43.300 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:03:43.561 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:03:43.561 DSA 0000:e7:01.0 8086 0b25 1 idxd - 
- 00:03:43.561 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:43.561 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:43.561 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:43.561 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:03:43.561 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:43.561 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:43.561 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:43.561 16:02:42 -- spdk/autotest.sh@141 -- # uname -s 00:03:43.561 16:02:42 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:43.561 16:02:42 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:43.561 16:02:42 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:46.110 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.110 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.110 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.371 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.371 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.371 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.371 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.371 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.371 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.371 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.371 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.633 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.633 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.633 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:46.633 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:46.633 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:47.235 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:47.235 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:47.528 16:02:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:48.489 16:02:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:48.489 16:02:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:48.489 16:02:47 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.489 16:02:47 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:48.489 16:02:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.489 16:02:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.489 16:02:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.489 16:02:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.489 16:02:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.751 16:02:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:48.751 16:02:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:48.751 16:02:47 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.055 Waiting for block devices as requested 00:03:52.055 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:03:52.055 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.055 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.055 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.055 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.055 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.055 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.316 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.316 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.316 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.316 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.578 
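Note: nvme_namespace_revert begins by collecting the NVMe controllers with get_nvme_bdfs, which simply feeds the gen_nvme.sh JSON through jq and bails out if the resulting array is empty; on this host that yields 0000:03:00.0 and 0000:c9:00.0, matching the two NVMe rows in the device listing above. A minimal reconstruction of the helper as it appears in the trace:

get_nvme_bdfs() {
  local bdfs=()
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  ((${#bdfs[@]} == 0)) && return 1        # nothing to run the namespace revert against
  printf '%s\n' "${bdfs[@]}"              # 0000:03:00.0 and 0000:c9:00.0 on this node
}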
0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.578 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.578 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.578 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.838 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:52.838 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:03:52.838 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:03:53.100 16:02:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:53.100 16:02:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:03:00.0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # grep 0000:03:00.0/nvme/nvme 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x5e' 00:03:53.100 16:02:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:53.100 16:02:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:53.100 16:02:51 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:53.100 16:02:51 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:53.100 16:02:51 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1542 -- # continue 00:03:53.100 16:02:51 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:53.100 16:02:51 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # grep 0000:c9:00.0/nvme/nvme 00:03:53.100 16:02:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:53.100 16:02:51 -- 
common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:53.100 16:02:51 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:03:53.100 16:02:51 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:53.100 16:02:51 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:53.100 16:02:52 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:53.101 16:02:52 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:53.101 16:02:52 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:53.101 16:02:52 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:53.101 16:02:52 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:53.101 16:02:52 -- common/autotest_common.sh@1542 -- # continue 00:03:53.101 16:02:52 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:53.101 16:02:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.101 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:53.361 16:02:52 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:53.361 16:02:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.361 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:53.361 16:02:52 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:55.905 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:55.905 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:55.905 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:55.905 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:55.905 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:55.905 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:55.905 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:55.905 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:55.905 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:56.166 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:56.166 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:56.166 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:56.166 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:56.166 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:56.166 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:56.166 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:56.739 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:57.001 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:57.260 16:02:56 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:57.261 16:02:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:57.261 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.261 16:02:56 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:57.261 16:02:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:57.261 16:02:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:57.261 16:02:56 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:57.261 16:02:56 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:57.261 16:02:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:57.261 16:02:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:57.261 16:02:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:57.261 16:02:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.261 16:02:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:57.261 16:02:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:57.261 16:02:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:57.261 16:02:56 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:57.261 16:02:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:57.261 16:02:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:03:00.0/device 00:03:57.261 16:02:56 -- common/autotest_common.sh@1565 -- # device=0x51c3 00:03:57.261 16:02:56 -- common/autotest_common.sh@1566 -- # [[ 0x51c3 == \0\x\0\a\5\4 ]] 00:03:57.261 16:02:56 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:57.261 16:02:56 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:03:57.261 16:02:56 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:03:57.261 16:02:56 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:57.261 16:02:56 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:03:57.261 16:02:56 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:57.261 16:02:56 -- common/autotest_common.sh@1578 -- # return 0 00:03:57.261 16:02:56 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:57.261 16:02:56 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:57.261 16:02:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:57.261 16:02:56 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:57.261 16:02:56 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:57.261 16:02:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:57.261 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.522 16:02:56 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:57.522 16:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.522 16:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.522 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.522 ************************************ 00:03:57.522 START TEST env 00:03:57.522 ************************************ 00:03:57.522 16:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:57.522 * Looking for test storage... 
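Note: opal_revert_cleanup only touches 0x0a54-class controllers, so get_nvme_bdfs_by_id reads each controller's PCI device ID out of sysfs and keeps the BDF only when it matches; here the two controllers report 0x51c3 and 0xa80a, the filtered list stays empty, and the revert is skipped (return 0). A sketch of that filter, reconstructed from the trace:

get_nvme_bdfs_by_id() {
  local id=$1 device bdfs=()
  for bdf in $(get_nvme_bdfs); do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x51c3 and 0xa80a in this run
    [[ $device == "$id" ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"                           # empty output -> nothing to revert
}
# as used above: mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)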
00:03:57.522 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:03:57.522 16:02:56 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.522 16:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.522 16:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.522 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.522 ************************************ 00:03:57.522 START TEST env_memory 00:03:57.522 ************************************ 00:03:57.522 16:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.522 00:03:57.522 00:03:57.522 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.522 http://cunit.sourceforge.net/ 00:03:57.522 00:03:57.522 00:03:57.522 Suite: memory 00:03:57.522 Test: alloc and free memory map ...[2024-04-23 16:02:56.331265] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:57.522 passed 00:03:57.522 Test: mem map translation ...[2024-04-23 16:02:56.378408] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:57.522 [2024-04-23 16:02:56.378442] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:57.522 [2024-04-23 16:02:56.378522] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:57.522 [2024-04-23 16:02:56.378546] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:57.522 passed 00:03:57.784 Test: mem map registration ...[2024-04-23 16:02:56.464657] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:57.784 [2024-04-23 16:02:56.464687] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:57.784 passed 00:03:57.784 Test: mem map adjacent registrations ...passed 00:03:57.784 00:03:57.784 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.784 suites 1 1 n/a 0 0 00:03:57.784 tests 4 4 4 0 0 00:03:57.784 asserts 152 152 152 0 n/a 00:03:57.784 00:03:57.784 Elapsed time = 0.293 seconds 00:03:57.784 00:03:57.784 real 0m0.317s 00:03:57.784 user 0m0.292s 00:03:57.784 sys 0m0.024s 00:03:57.784 16:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.784 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.784 ************************************ 00:03:57.784 END TEST env_memory 00:03:57.784 ************************************ 00:03:57.784 16:02:56 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.784 16:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.784 16:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.784 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:03:57.785 ************************************ 00:03:57.785 
START TEST env_vtophys 00:03:57.785 ************************************ 00:03:57.785 16:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.785 EAL: lib.eal log level changed from notice to debug 00:03:57.785 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.785 EAL: Detected lcore 1 as core 1 on socket 0 00:03:57.785 EAL: Detected lcore 2 as core 2 on socket 0 00:03:57.785 EAL: Detected lcore 3 as core 3 on socket 0 00:03:57.785 EAL: Detected lcore 4 as core 4 on socket 0 00:03:57.785 EAL: Detected lcore 5 as core 5 on socket 0 00:03:57.785 EAL: Detected lcore 6 as core 6 on socket 0 00:03:57.785 EAL: Detected lcore 7 as core 7 on socket 0 00:03:57.785 EAL: Detected lcore 8 as core 8 on socket 0 00:03:57.785 EAL: Detected lcore 9 as core 9 on socket 0 00:03:57.785 EAL: Detected lcore 10 as core 10 on socket 0 00:03:57.785 EAL: Detected lcore 11 as core 11 on socket 0 00:03:57.785 EAL: Detected lcore 12 as core 12 on socket 0 00:03:57.785 EAL: Detected lcore 13 as core 13 on socket 0 00:03:57.785 EAL: Detected lcore 14 as core 14 on socket 0 00:03:57.785 EAL: Detected lcore 15 as core 15 on socket 0 00:03:57.785 EAL: Detected lcore 16 as core 16 on socket 0 00:03:57.785 EAL: Detected lcore 17 as core 17 on socket 0 00:03:57.785 EAL: Detected lcore 18 as core 18 on socket 0 00:03:57.785 EAL: Detected lcore 19 as core 19 on socket 0 00:03:57.785 EAL: Detected lcore 20 as core 20 on socket 0 00:03:57.785 EAL: Detected lcore 21 as core 21 on socket 0 00:03:57.785 EAL: Detected lcore 22 as core 22 on socket 0 00:03:57.785 EAL: Detected lcore 23 as core 23 on socket 0 00:03:57.785 EAL: Detected lcore 24 as core 24 on socket 0 00:03:57.785 EAL: Detected lcore 25 as core 25 on socket 0 00:03:57.785 EAL: Detected lcore 26 as core 26 on socket 0 00:03:57.785 EAL: Detected lcore 27 as core 27 on socket 0 00:03:57.785 EAL: Detected lcore 28 as core 28 on socket 0 00:03:57.785 EAL: Detected lcore 29 as core 29 on socket 0 00:03:57.785 EAL: Detected lcore 30 as core 30 on socket 0 00:03:57.785 EAL: Detected lcore 31 as core 31 on socket 0 00:03:57.785 EAL: Detected lcore 32 as core 0 on socket 1 00:03:57.785 EAL: Detected lcore 33 as core 1 on socket 1 00:03:57.785 EAL: Detected lcore 34 as core 2 on socket 1 00:03:57.785 EAL: Detected lcore 35 as core 3 on socket 1 00:03:57.785 EAL: Detected lcore 36 as core 4 on socket 1 00:03:57.785 EAL: Detected lcore 37 as core 5 on socket 1 00:03:57.785 EAL: Detected lcore 38 as core 6 on socket 1 00:03:57.785 EAL: Detected lcore 39 as core 7 on socket 1 00:03:57.785 EAL: Detected lcore 40 as core 8 on socket 1 00:03:57.785 EAL: Detected lcore 41 as core 9 on socket 1 00:03:57.785 EAL: Detected lcore 42 as core 10 on socket 1 00:03:57.785 EAL: Detected lcore 43 as core 11 on socket 1 00:03:57.785 EAL: Detected lcore 44 as core 12 on socket 1 00:03:57.785 EAL: Detected lcore 45 as core 13 on socket 1 00:03:57.785 EAL: Detected lcore 46 as core 14 on socket 1 00:03:57.785 EAL: Detected lcore 47 as core 15 on socket 1 00:03:57.785 EAL: Detected lcore 48 as core 16 on socket 1 00:03:57.785 EAL: Detected lcore 49 as core 17 on socket 1 00:03:57.785 EAL: Detected lcore 50 as core 18 on socket 1 00:03:57.785 EAL: Detected lcore 51 as core 19 on socket 1 00:03:57.785 EAL: Detected lcore 52 as core 20 on socket 1 00:03:57.785 EAL: Detected lcore 53 as core 21 on socket 1 00:03:57.785 EAL: Detected lcore 54 as core 22 on socket 1 00:03:57.785 EAL: Detected lcore 55 as core 23 on socket 1 
00:03:57.785 EAL: Detected lcore 56 as core 24 on socket 1 00:03:57.785 EAL: Detected lcore 57 as core 25 on socket 1 00:03:57.785 EAL: Detected lcore 58 as core 26 on socket 1 00:03:57.785 EAL: Detected lcore 59 as core 27 on socket 1 00:03:57.785 EAL: Detected lcore 60 as core 28 on socket 1 00:03:57.785 EAL: Detected lcore 61 as core 29 on socket 1 00:03:57.785 EAL: Detected lcore 62 as core 30 on socket 1 00:03:57.785 EAL: Detected lcore 63 as core 31 on socket 1 00:03:57.785 EAL: Detected lcore 64 as core 0 on socket 0 00:03:57.785 EAL: Detected lcore 65 as core 1 on socket 0 00:03:57.785 EAL: Detected lcore 66 as core 2 on socket 0 00:03:57.785 EAL: Detected lcore 67 as core 3 on socket 0 00:03:57.785 EAL: Detected lcore 68 as core 4 on socket 0 00:03:57.785 EAL: Detected lcore 69 as core 5 on socket 0 00:03:57.785 EAL: Detected lcore 70 as core 6 on socket 0 00:03:57.785 EAL: Detected lcore 71 as core 7 on socket 0 00:03:57.785 EAL: Detected lcore 72 as core 8 on socket 0 00:03:57.785 EAL: Detected lcore 73 as core 9 on socket 0 00:03:57.785 EAL: Detected lcore 74 as core 10 on socket 0 00:03:57.785 EAL: Detected lcore 75 as core 11 on socket 0 00:03:57.785 EAL: Detected lcore 76 as core 12 on socket 0 00:03:57.785 EAL: Detected lcore 77 as core 13 on socket 0 00:03:57.785 EAL: Detected lcore 78 as core 14 on socket 0 00:03:57.785 EAL: Detected lcore 79 as core 15 on socket 0 00:03:57.785 EAL: Detected lcore 80 as core 16 on socket 0 00:03:57.785 EAL: Detected lcore 81 as core 17 on socket 0 00:03:57.785 EAL: Detected lcore 82 as core 18 on socket 0 00:03:57.785 EAL: Detected lcore 83 as core 19 on socket 0 00:03:57.785 EAL: Detected lcore 84 as core 20 on socket 0 00:03:57.785 EAL: Detected lcore 85 as core 21 on socket 0 00:03:57.785 EAL: Detected lcore 86 as core 22 on socket 0 00:03:57.785 EAL: Detected lcore 87 as core 23 on socket 0 00:03:57.785 EAL: Detected lcore 88 as core 24 on socket 0 00:03:57.785 EAL: Detected lcore 89 as core 25 on socket 0 00:03:57.785 EAL: Detected lcore 90 as core 26 on socket 0 00:03:57.785 EAL: Detected lcore 91 as core 27 on socket 0 00:03:57.785 EAL: Detected lcore 92 as core 28 on socket 0 00:03:57.785 EAL: Detected lcore 93 as core 29 on socket 0 00:03:57.785 EAL: Detected lcore 94 as core 30 on socket 0 00:03:57.785 EAL: Detected lcore 95 as core 31 on socket 0 00:03:57.785 EAL: Detected lcore 96 as core 0 on socket 1 00:03:57.785 EAL: Detected lcore 97 as core 1 on socket 1 00:03:57.785 EAL: Detected lcore 98 as core 2 on socket 1 00:03:57.785 EAL: Detected lcore 99 as core 3 on socket 1 00:03:57.785 EAL: Detected lcore 100 as core 4 on socket 1 00:03:57.785 EAL: Detected lcore 101 as core 5 on socket 1 00:03:57.785 EAL: Detected lcore 102 as core 6 on socket 1 00:03:57.785 EAL: Detected lcore 103 as core 7 on socket 1 00:03:57.785 EAL: Detected lcore 104 as core 8 on socket 1 00:03:57.785 EAL: Detected lcore 105 as core 9 on socket 1 00:03:57.785 EAL: Detected lcore 106 as core 10 on socket 1 00:03:57.785 EAL: Detected lcore 107 as core 11 on socket 1 00:03:57.785 EAL: Detected lcore 108 as core 12 on socket 1 00:03:57.785 EAL: Detected lcore 109 as core 13 on socket 1 00:03:57.785 EAL: Detected lcore 110 as core 14 on socket 1 00:03:57.785 EAL: Detected lcore 111 as core 15 on socket 1 00:03:57.785 EAL: Detected lcore 112 as core 16 on socket 1 00:03:57.785 EAL: Detected lcore 113 as core 17 on socket 1 00:03:57.785 EAL: Detected lcore 114 as core 18 on socket 1 00:03:57.785 EAL: Detected lcore 115 as core 19 on socket 1 00:03:57.785 EAL: 
Detected lcore 116 as core 20 on socket 1 00:03:57.785 EAL: Detected lcore 117 as core 21 on socket 1 00:03:57.785 EAL: Detected lcore 118 as core 22 on socket 1 00:03:57.785 EAL: Detected lcore 119 as core 23 on socket 1 00:03:57.785 EAL: Detected lcore 120 as core 24 on socket 1 00:03:57.785 EAL: Detected lcore 121 as core 25 on socket 1 00:03:57.785 EAL: Detected lcore 122 as core 26 on socket 1 00:03:57.785 EAL: Detected lcore 123 as core 27 on socket 1 00:03:57.785 EAL: Detected lcore 124 as core 28 on socket 1 00:03:57.785 EAL: Detected lcore 125 as core 29 on socket 1 00:03:57.785 EAL: Detected lcore 126 as core 30 on socket 1 00:03:57.785 EAL: Detected lcore 127 as core 31 on socket 1 00:03:57.785 EAL: Maximum logical cores by configuration: 128 00:03:57.785 EAL: Detected CPU lcores: 128 00:03:57.785 EAL: Detected NUMA nodes: 2 00:03:57.785 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:57.785 EAL: Detected shared linkage of DPDK 00:03:57.785 EAL: No shared files mode enabled, IPC will be disabled 00:03:58.047 EAL: Bus pci wants IOVA as 'DC' 00:03:58.048 EAL: Buses did not request a specific IOVA mode. 00:03:58.048 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:58.048 EAL: Selected IOVA mode 'VA' 00:03:58.048 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.048 EAL: Probing VFIO support... 00:03:58.048 EAL: IOMMU type 1 (Type 1) is supported 00:03:58.048 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:58.048 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:58.048 EAL: VFIO support initialized 00:03:58.048 EAL: Ask a virtual area of 0x2e000 bytes 00:03:58.048 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:58.048 EAL: Setting up physically contiguous memory... 00:03:58.048 EAL: Setting maximum number of open files to 524288 00:03:58.048 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:58.048 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:58.048 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:58.048 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:58.048 EAL: Ask a virtual area of 0x61000 bytes 00:03:58.048 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:58.048 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:58.048 EAL: Ask a virtual area of 0x400000000 bytes 00:03:58.048 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:58.048 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:58.048 EAL: Hugepages will be freed exactly as allocated. 00:03:58.048 EAL: No shared files mode enabled, IPC is disabled 00:03:58.048 EAL: No shared files mode enabled, IPC is disabled 00:03:58.048 EAL: TSC frequency is ~1900000 KHz 00:03:58.048 EAL: Main lcore 0 is ready (tid=7fb44a80da40;cpuset=[0]) 00:03:58.048 EAL: Trying to obtain current memory policy. 00:03:58.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.048 EAL: Restoring previous memory policy: 0 00:03:58.048 EAL: request: mp_malloc_sync 00:03:58.048 EAL: No shared files mode enabled, IPC is disabled 00:03:58.048 EAL: Heap on socket 0 was expanded by 2MB 00:03:58.048 EAL: No shared files mode enabled, IPC is disabled 00:03:58.048 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:58.048 EAL: Mem event callback 'spdk:(nil)' registered 00:03:58.048 00:03:58.048 00:03:58.048 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.048 http://cunit.sourceforge.net/ 00:03:58.048 00:03:58.048 00:03:58.048 Suite: components_suite 00:03:58.310 Test: vtophys_malloc_test ...passed 00:03:58.310 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
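Note: the expand/shrink messages that follow are vtophys_spdk_malloc_test growing and releasing the EAL heap in 2 MB hugepage steps; every allocation lands on socket 0 because, as the hugepage listing earlier showed, only node 0 has 2048 kB pages reserved (2048 of them) while node 1 has none, which is also why EAL printed "No free 2048 kB hugepages reported on node 1". One way to confirm the per-node reservation outside the test, using the standard sysfs counters rather than anything the test itself runs:

for node in /sys/devices/system/node/node*; do
  free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
  total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  echo "${node##*/}: $free free of $total 2048kB hugepages"   # node0: 2048 of 2048, node1: 0 of 0 here
done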
00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 4MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 4MB 00:03:58.310 EAL: Trying to obtain current memory policy. 00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 6MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 6MB 00:03:58.310 EAL: Trying to obtain current memory policy. 00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 10MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 10MB 00:03:58.310 EAL: Trying to obtain current memory policy. 00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 18MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 18MB 00:03:58.310 EAL: Trying to obtain current memory policy. 00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 34MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 34MB 00:03:58.310 EAL: Trying to obtain current memory policy. 
00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 66MB 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was shrunk by 66MB 00:03:58.310 EAL: Trying to obtain current memory policy. 00:03:58.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.310 EAL: Restoring previous memory policy: 4 00:03:58.310 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.310 EAL: request: mp_malloc_sync 00:03:58.310 EAL: No shared files mode enabled, IPC is disabled 00:03:58.310 EAL: Heap on socket 0 was expanded by 130MB 00:03:58.571 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.571 EAL: request: mp_malloc_sync 00:03:58.571 EAL: No shared files mode enabled, IPC is disabled 00:03:58.571 EAL: Heap on socket 0 was shrunk by 130MB 00:03:58.571 EAL: Trying to obtain current memory policy. 00:03:58.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.571 EAL: Restoring previous memory policy: 4 00:03:58.571 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.571 EAL: request: mp_malloc_sync 00:03:58.571 EAL: No shared files mode enabled, IPC is disabled 00:03:58.571 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.833 EAL: request: mp_malloc_sync 00:03:58.833 EAL: No shared files mode enabled, IPC is disabled 00:03:58.833 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.833 EAL: Trying to obtain current memory policy. 00:03:58.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.833 EAL: Restoring previous memory policy: 4 00:03:58.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.833 EAL: request: mp_malloc_sync 00:03:58.833 EAL: No shared files mode enabled, IPC is disabled 00:03:58.833 EAL: Heap on socket 0 was expanded by 514MB 00:03:59.407 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.407 EAL: request: mp_malloc_sync 00:03:59.407 EAL: No shared files mode enabled, IPC is disabled 00:03:59.407 EAL: Heap on socket 0 was shrunk by 514MB 00:03:59.668 EAL: Trying to obtain current memory policy. 
00:03:59.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.668 EAL: Restoring previous memory policy: 4 00:03:59.668 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.668 EAL: request: mp_malloc_sync 00:03:59.668 EAL: No shared files mode enabled, IPC is disabled 00:03:59.668 EAL: Heap on socket 0 was expanded by 1026MB 00:04:00.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.501 EAL: request: mp_malloc_sync 00:04:00.501 EAL: No shared files mode enabled, IPC is disabled 00:04:00.501 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.074 passed 00:04:01.074 00:04:01.074 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.074 suites 1 1 n/a 0 0 00:04:01.074 tests 2 2 2 0 0 00:04:01.074 asserts 497 497 497 0 n/a 00:04:01.074 00:04:01.074 Elapsed time = 2.908 seconds 00:04:01.074 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.074 EAL: request: mp_malloc_sync 00:04:01.074 EAL: No shared files mode enabled, IPC is disabled 00:04:01.074 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.074 EAL: No shared files mode enabled, IPC is disabled 00:04:01.074 EAL: No shared files mode enabled, IPC is disabled 00:04:01.074 EAL: No shared files mode enabled, IPC is disabled 00:04:01.074 00:04:01.074 real 0m3.134s 00:04:01.074 user 0m2.443s 00:04:01.074 sys 0m0.646s 00:04:01.074 16:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.074 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.074 ************************************ 00:04:01.074 END TEST env_vtophys 00:04:01.074 ************************************ 00:04:01.074 16:02:59 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.074 16:02:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.074 16:02:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.074 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.074 ************************************ 00:04:01.074 START TEST env_pci 00:04:01.074 ************************************ 00:04:01.074 16:02:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.074 00:04:01.074 00:04:01.074 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.074 http://cunit.sourceforge.net/ 00:04:01.074 00:04:01.074 00:04:01.074 Suite: pci 00:04:01.074 Test: pci_hook ...[2024-04-23 16:02:59.821674] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2864131 has claimed it 00:04:01.074 EAL: Cannot find device (10000:00:01.0) 00:04:01.074 EAL: Failed to attach device on primary process 00:04:01.074 passed 00:04:01.074 00:04:01.074 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.074 suites 1 1 n/a 0 0 00:04:01.074 tests 1 1 1 0 0 00:04:01.074 asserts 25 25 25 0 n/a 00:04:01.074 00:04:01.074 Elapsed time = 0.052 seconds 00:04:01.074 00:04:01.074 real 0m0.102s 00:04:01.074 user 0m0.036s 00:04:01.074 sys 0m0.065s 00:04:01.074 16:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.074 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.074 ************************************ 00:04:01.074 END TEST env_pci 00:04:01.074 ************************************ 00:04:01.074 16:02:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.074 16:02:59 -- env/env.sh@15 -- # uname 00:04:01.074 16:02:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
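Note: env.sh launches the remaining binaries with an explicit DPDK command line: it starts from a single-core mask and, after the uname check above confirms Linux, appends a fixed --base-virtaddr (which keeps DPDK's mappings at a predictable address), as the next lines show before env_dpdk_post_init starts probing. A sketch of that argument build, mirroring the argv steps in the trace:

argv='-c 0x1 '                                   # pin the test to one core
if [[ $(uname) == Linux ]]; then
  argv+=--base-virtaddr=0x200000000000           # stable virtual base for DPDK mappings
fi
run_test env_dpdk_post_init \
  "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv   # argv deliberately unquoted so it word-splits into flags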
00:04:01.074 16:02:59 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.074 16:02:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.074 16:02:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:01.074 16:02:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.074 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:01.074 ************************************ 00:04:01.074 START TEST env_dpdk_post_init 00:04:01.074 ************************************ 00:04:01.074 16:02:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.074 EAL: Detected CPU lcores: 128 00:04:01.074 EAL: Detected NUMA nodes: 2 00:04:01.074 EAL: Detected shared linkage of DPDK 00:04:01.074 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.335 EAL: Selected IOVA mode 'VA' 00:04:01.335 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.335 EAL: VFIO support initialized 00:04:01.335 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.335 EAL: Using IOMMU type 1 (Type 1) 00:04:01.596 EAL: Probe PCI driver: spdk_nvme (1344:51c3) device: 0000:03:00.0 (socket 0) 00:04:01.596 EAL: Ignore mapping IO port bar(1) 00:04:01.596 EAL: Ignore mapping IO port bar(3) 00:04:01.856 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:01.856 EAL: Ignore mapping IO port bar(1) 00:04:01.856 EAL: Ignore mapping IO port bar(3) 00:04:02.116 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:02.116 EAL: Ignore mapping IO port bar(1) 00:04:02.116 EAL: Ignore mapping IO port bar(3) 00:04:02.116 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:02.377 EAL: Ignore mapping IO port bar(1) 00:04:02.377 EAL: Ignore mapping IO port bar(3) 00:04:02.377 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:02.639 EAL: Ignore mapping IO port bar(1) 00:04:02.639 EAL: Ignore mapping IO port bar(3) 00:04:02.639 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:02.901 EAL: Ignore mapping IO port bar(1) 00:04:02.901 EAL: Ignore mapping IO port bar(3) 00:04:02.901 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:02.901 EAL: Ignore mapping IO port bar(1) 00:04:02.901 EAL: Ignore mapping IO port bar(3) 00:04:03.163 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:03.163 EAL: Ignore mapping IO port bar(1) 00:04:03.163 EAL: Ignore mapping IO port bar(3) 00:04:03.425 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:03.687 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:c9:00.0 (socket 1) 00:04:03.687 EAL: Ignore mapping IO port bar(1) 00:04:03.687 EAL: Ignore mapping IO port bar(3) 00:04:03.687 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:03.948 EAL: Ignore mapping IO port bar(1) 00:04:03.948 EAL: Ignore mapping IO port bar(3) 00:04:03.948 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:04.209 EAL: Ignore mapping IO port bar(1) 00:04:04.209 EAL: Ignore mapping IO port bar(3) 00:04:04.209 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:04.470 EAL: Ignore 
mapping IO port bar(1) 00:04:04.470 EAL: Ignore mapping IO port bar(3) 00:04:04.470 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:04.730 EAL: Ignore mapping IO port bar(1) 00:04:04.730 EAL: Ignore mapping IO port bar(3) 00:04:04.730 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:04.730 EAL: Ignore mapping IO port bar(1) 00:04:04.730 EAL: Ignore mapping IO port bar(3) 00:04:04.992 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:04.992 EAL: Ignore mapping IO port bar(1) 00:04:04.992 EAL: Ignore mapping IO port bar(3) 00:04:05.251 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:05.251 EAL: Ignore mapping IO port bar(1) 00:04:05.251 EAL: Ignore mapping IO port bar(3) 00:04:05.251 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:06.195 EAL: Releasing PCI mapped resource for 0000:03:00.0 00:04:06.195 EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x202001000000 00:04:06.195 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:06.195 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x2020011c0000 00:04:06.457 Starting DPDK initialization... 00:04:06.457 Starting SPDK post initialization... 00:04:06.457 SPDK NVMe probe 00:04:06.457 Attaching to 0000:03:00.0 00:04:06.457 Attaching to 0000:c9:00.0 00:04:06.457 Attached to 0000:c9:00.0 00:04:06.457 Attached to 0000:03:00.0 00:04:06.457 Cleaning up... 00:04:08.374 00:04:08.374 real 0m6.986s 00:04:08.374 user 0m1.131s 00:04:08.374 sys 0m0.157s 00:04:08.374 16:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.374 16:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.374 ************************************ 00:04:08.374 END TEST env_dpdk_post_init 00:04:08.374 ************************************ 00:04:08.374 16:03:06 -- env/env.sh@26 -- # uname 00:04:08.374 16:03:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.374 16:03:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.374 16:03:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.374 16:03:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.374 16:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:08.374 ************************************ 00:04:08.374 START TEST env_mem_callbacks 00:04:08.374 ************************************ 00:04:08.374 16:03:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.374 EAL: Detected CPU lcores: 128 00:04:08.374 EAL: Detected NUMA nodes: 2 00:04:08.374 EAL: Detected shared linkage of DPDK 00:04:08.374 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.374 EAL: Selected IOVA mode 'VA' 00:04:08.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.374 EAL: VFIO support initialized 00:04:08.374 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.374 00:04:08.374 00:04:08.374 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.374 http://cunit.sourceforge.net/ 00:04:08.374 00:04:08.374 00:04:08.374 Suite: memory 00:04:08.374 Test: test ... 
00:04:08.374 register 0x200000200000 2097152 00:04:08.374 malloc 3145728 00:04:08.374 register 0x200000400000 4194304 00:04:08.374 buf 0x2000004fffc0 len 3145728 PASSED 00:04:08.374 malloc 64 00:04:08.374 buf 0x2000004ffec0 len 64 PASSED 00:04:08.374 malloc 4194304 00:04:08.374 register 0x200000800000 6291456 00:04:08.374 buf 0x2000009fffc0 len 4194304 PASSED 00:04:08.374 free 0x2000004fffc0 3145728 00:04:08.374 free 0x2000004ffec0 64 00:04:08.374 unregister 0x200000400000 4194304 PASSED 00:04:08.374 free 0x2000009fffc0 4194304 00:04:08.374 unregister 0x200000800000 6291456 PASSED 00:04:08.374 malloc 8388608 00:04:08.374 register 0x200000400000 10485760 00:04:08.374 buf 0x2000005fffc0 len 8388608 PASSED 00:04:08.374 free 0x2000005fffc0 8388608 00:04:08.374 unregister 0x200000400000 10485760 PASSED 00:04:08.374 passed 00:04:08.374 00:04:08.374 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.374 suites 1 1 n/a 0 0 00:04:08.374 tests 1 1 1 0 0 00:04:08.374 asserts 15 15 15 0 n/a 00:04:08.374 00:04:08.374 Elapsed time = 0.022 seconds 00:04:08.374 00:04:08.374 real 0m0.136s 00:04:08.374 user 0m0.059s 00:04:08.374 sys 0m0.075s 00:04:08.374 16:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.374 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:08.374 ************************************ 00:04:08.374 END TEST env_mem_callbacks 00:04:08.374 ************************************ 00:04:08.374 00:04:08.374 real 0m10.926s 00:04:08.374 user 0m4.055s 00:04:08.374 sys 0m1.165s 00:04:08.374 16:03:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.374 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:08.374 ************************************ 00:04:08.374 END TEST env 00:04:08.374 ************************************ 00:04:08.374 16:03:07 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.374 16:03:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.374 16:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.374 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:08.374 ************************************ 00:04:08.374 START TEST rpc 00:04:08.374 ************************************ 00:04:08.374 16:03:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.374 * Looking for test storage... 00:04:08.374 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:08.374 16:03:07 -- rpc/rpc.sh@65 -- # spdk_pid=2866088 00:04:08.374 16:03:07 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.375 16:03:07 -- rpc/rpc.sh@67 -- # waitforlisten 2866088 00:04:08.375 16:03:07 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:08.375 16:03:07 -- common/autotest_common.sh@819 -- # '[' -z 2866088 ']' 00:04:08.375 16:03:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.375 16:03:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:08.375 16:03:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:08.375 16:03:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:08.375 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:08.375 [2024-04-23 16:03:07.279111] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:08.375 [2024-04-23 16:03:07.279205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866088 ] 00:04:08.636 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.636 [2024-04-23 16:03:07.370093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.636 [2024-04-23 16:03:07.467946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:08.636 [2024-04-23 16:03:07.468124] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.636 [2024-04-23 16:03:07.468137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2866088' to capture a snapshot of events at runtime. 00:04:08.636 [2024-04-23 16:03:07.468147] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2866088 for offline analysis/debug. 00:04:08.636 [2024-04-23 16:03:07.468172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.206 16:03:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:09.206 16:03:08 -- common/autotest_common.sh@852 -- # return 0 00:04:09.206 16:03:08 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:09.206 16:03:08 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:09.206 16:03:08 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:09.206 16:03:08 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:09.206 16:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.206 16:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.206 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.206 ************************************ 00:04:09.206 START TEST rpc_integrity 00:04:09.206 ************************************ 00:04:09.206 16:03:08 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:09.206 16:03:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.206 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.206 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.206 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.206 16:03:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.206 16:03:08 -- rpc/rpc.sh@13 -- # jq length 00:04:09.469 16:03:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.469 16:03:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.469 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.469 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.469 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.469 16:03:08 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.469 16:03:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.469 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.469 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.469 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.469 16:03:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.469 { 00:04:09.469 "name": "Malloc0", 00:04:09.469 "aliases": [ 00:04:09.469 "751d9296-a824-4f88-aa17-62c898156840" 00:04:09.469 ], 00:04:09.469 "product_name": "Malloc disk", 00:04:09.469 "block_size": 512, 00:04:09.469 "num_blocks": 16384, 00:04:09.469 "uuid": "751d9296-a824-4f88-aa17-62c898156840", 00:04:09.469 "assigned_rate_limits": { 00:04:09.469 "rw_ios_per_sec": 0, 00:04:09.469 "rw_mbytes_per_sec": 0, 00:04:09.469 "r_mbytes_per_sec": 0, 00:04:09.469 "w_mbytes_per_sec": 0 00:04:09.469 }, 00:04:09.469 "claimed": false, 00:04:09.469 "zoned": false, 00:04:09.469 "supported_io_types": { 00:04:09.469 "read": true, 00:04:09.469 "write": true, 00:04:09.469 "unmap": true, 00:04:09.469 "write_zeroes": true, 00:04:09.469 "flush": true, 00:04:09.469 "reset": true, 00:04:09.469 "compare": false, 00:04:09.469 "compare_and_write": false, 00:04:09.469 "abort": true, 00:04:09.469 "nvme_admin": false, 00:04:09.469 "nvme_io": false 00:04:09.469 }, 00:04:09.469 "memory_domains": [ 00:04:09.469 { 00:04:09.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.469 "dma_device_type": 2 00:04:09.469 } 00:04:09.469 ], 00:04:09.469 "driver_specific": {} 00:04:09.469 } 00:04:09.469 ]' 00:04:09.469 16:03:08 -- rpc/rpc.sh@17 -- # jq length 00:04:09.469 16:03:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.469 16:03:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.469 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.469 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.469 [2024-04-23 16:03:08.209684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.469 [2024-04-23 16:03:08.209735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.469 [2024-04-23 16:03:08.209761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:04:09.469 [2024-04-23 16:03:08.209772] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.469 [2024-04-23 16:03:08.211675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.469 [2024-04-23 16:03:08.211703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.469 Passthru0 00:04:09.469 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.469 16:03:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.469 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.469 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.469 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.469 16:03:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.469 { 00:04:09.469 "name": "Malloc0", 00:04:09.469 "aliases": [ 00:04:09.469 "751d9296-a824-4f88-aa17-62c898156840" 00:04:09.469 ], 00:04:09.469 "product_name": "Malloc disk", 00:04:09.469 "block_size": 512, 00:04:09.469 "num_blocks": 16384, 00:04:09.469 "uuid": "751d9296-a824-4f88-aa17-62c898156840", 00:04:09.469 "assigned_rate_limits": { 00:04:09.469 "rw_ios_per_sec": 0, 00:04:09.469 "rw_mbytes_per_sec": 0, 00:04:09.469 "r_mbytes_per_sec": 0, 00:04:09.469 
"w_mbytes_per_sec": 0 00:04:09.469 }, 00:04:09.469 "claimed": true, 00:04:09.469 "claim_type": "exclusive_write", 00:04:09.469 "zoned": false, 00:04:09.469 "supported_io_types": { 00:04:09.469 "read": true, 00:04:09.469 "write": true, 00:04:09.469 "unmap": true, 00:04:09.469 "write_zeroes": true, 00:04:09.469 "flush": true, 00:04:09.469 "reset": true, 00:04:09.469 "compare": false, 00:04:09.469 "compare_and_write": false, 00:04:09.469 "abort": true, 00:04:09.469 "nvme_admin": false, 00:04:09.469 "nvme_io": false 00:04:09.469 }, 00:04:09.469 "memory_domains": [ 00:04:09.469 { 00:04:09.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.469 "dma_device_type": 2 00:04:09.469 } 00:04:09.469 ], 00:04:09.469 "driver_specific": {} 00:04:09.469 }, 00:04:09.469 { 00:04:09.469 "name": "Passthru0", 00:04:09.469 "aliases": [ 00:04:09.469 "1d32c77f-3ff5-58fe-8bf3-31712df2d7e4" 00:04:09.469 ], 00:04:09.469 "product_name": "passthru", 00:04:09.469 "block_size": 512, 00:04:09.469 "num_blocks": 16384, 00:04:09.469 "uuid": "1d32c77f-3ff5-58fe-8bf3-31712df2d7e4", 00:04:09.469 "assigned_rate_limits": { 00:04:09.469 "rw_ios_per_sec": 0, 00:04:09.469 "rw_mbytes_per_sec": 0, 00:04:09.469 "r_mbytes_per_sec": 0, 00:04:09.469 "w_mbytes_per_sec": 0 00:04:09.469 }, 00:04:09.469 "claimed": false, 00:04:09.469 "zoned": false, 00:04:09.469 "supported_io_types": { 00:04:09.469 "read": true, 00:04:09.469 "write": true, 00:04:09.469 "unmap": true, 00:04:09.469 "write_zeroes": true, 00:04:09.469 "flush": true, 00:04:09.469 "reset": true, 00:04:09.469 "compare": false, 00:04:09.469 "compare_and_write": false, 00:04:09.469 "abort": true, 00:04:09.469 "nvme_admin": false, 00:04:09.469 "nvme_io": false 00:04:09.469 }, 00:04:09.469 "memory_domains": [ 00:04:09.469 { 00:04:09.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.469 "dma_device_type": 2 00:04:09.469 } 00:04:09.469 ], 00:04:09.469 "driver_specific": { 00:04:09.470 "passthru": { 00:04:09.470 "name": "Passthru0", 00:04:09.470 "base_bdev_name": "Malloc0" 00:04:09.470 } 00:04:09.470 } 00:04:09.470 } 00:04:09.470 ]' 00:04:09.470 16:03:08 -- rpc/rpc.sh@21 -- # jq length 00:04:09.470 16:03:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.470 16:03:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.470 16:03:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.470 16:03:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.470 16:03:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.470 16:03:08 -- rpc/rpc.sh@26 -- # jq length 00:04:09.470 16:03:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.470 00:04:09.470 real 0m0.233s 00:04:09.470 user 0m0.125s 00:04:09.470 sys 0m0.035s 00:04:09.470 16:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 ************************************ 00:04:09.470 END TEST rpc_integrity 
00:04:09.470 ************************************ 00:04:09.470 16:03:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.470 16:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.470 16:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 ************************************ 00:04:09.470 START TEST rpc_plugins 00:04:09.470 ************************************ 00:04:09.470 16:03:08 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:09.470 16:03:08 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.470 16:03:08 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.470 16:03:08 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.470 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.470 16:03:08 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.470 { 00:04:09.470 "name": "Malloc1", 00:04:09.470 "aliases": [ 00:04:09.470 "d4817cef-1e56-4eb1-aa74-5d19aec9f9a8" 00:04:09.470 ], 00:04:09.470 "product_name": "Malloc disk", 00:04:09.470 "block_size": 4096, 00:04:09.470 "num_blocks": 256, 00:04:09.470 "uuid": "d4817cef-1e56-4eb1-aa74-5d19aec9f9a8", 00:04:09.470 "assigned_rate_limits": { 00:04:09.470 "rw_ios_per_sec": 0, 00:04:09.470 "rw_mbytes_per_sec": 0, 00:04:09.470 "r_mbytes_per_sec": 0, 00:04:09.470 "w_mbytes_per_sec": 0 00:04:09.470 }, 00:04:09.470 "claimed": false, 00:04:09.470 "zoned": false, 00:04:09.470 "supported_io_types": { 00:04:09.470 "read": true, 00:04:09.470 "write": true, 00:04:09.470 "unmap": true, 00:04:09.470 "write_zeroes": true, 00:04:09.470 "flush": true, 00:04:09.470 "reset": true, 00:04:09.470 "compare": false, 00:04:09.470 "compare_and_write": false, 00:04:09.470 "abort": true, 00:04:09.470 "nvme_admin": false, 00:04:09.470 "nvme_io": false 00:04:09.470 }, 00:04:09.470 "memory_domains": [ 00:04:09.470 { 00:04:09.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.470 "dma_device_type": 2 00:04:09.470 } 00:04:09.470 ], 00:04:09.470 "driver_specific": {} 00:04:09.470 } 00:04:09.470 ]' 00:04:09.470 16:03:08 -- rpc/rpc.sh@32 -- # jq length 00:04:09.470 16:03:08 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.470 16:03:08 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.470 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.470 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.732 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.732 16:03:08 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.732 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.732 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.732 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.732 16:03:08 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.732 16:03:08 -- rpc/rpc.sh@36 -- # jq length 00:04:09.732 16:03:08 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.732 00:04:09.732 real 0m0.093s 00:04:09.732 user 0m0.049s 00:04:09.732 sys 0m0.016s 00:04:09.732 16:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.732 16:03:08 -- common/autotest_common.sh@10 -- # set +x 
00:04:09.732 ************************************ 00:04:09.732 END TEST rpc_plugins 00:04:09.732 ************************************ 00:04:09.732 16:03:08 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.732 16:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.732 16:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.732 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.732 ************************************ 00:04:09.732 START TEST rpc_trace_cmd_test 00:04:09.732 ************************************ 00:04:09.732 16:03:08 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:09.732 16:03:08 -- rpc/rpc.sh@40 -- # local info 00:04:09.732 16:03:08 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.732 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.732 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.732 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.732 16:03:08 -- rpc/rpc.sh@42 -- # info='{ 00:04:09.732 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2866088", 00:04:09.732 "tpoint_group_mask": "0x8", 00:04:09.732 "iscsi_conn": { 00:04:09.732 "mask": "0x2", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "scsi": { 00:04:09.732 "mask": "0x4", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "bdev": { 00:04:09.732 "mask": "0x8", 00:04:09.732 "tpoint_mask": "0xffffffffffffffff" 00:04:09.732 }, 00:04:09.732 "nvmf_rdma": { 00:04:09.732 "mask": "0x10", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "nvmf_tcp": { 00:04:09.732 "mask": "0x20", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "ftl": { 00:04:09.732 "mask": "0x40", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "blobfs": { 00:04:09.732 "mask": "0x80", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "dsa": { 00:04:09.732 "mask": "0x200", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "thread": { 00:04:09.732 "mask": "0x400", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "nvme_pcie": { 00:04:09.732 "mask": "0x800", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "iaa": { 00:04:09.732 "mask": "0x1000", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "nvme_tcp": { 00:04:09.732 "mask": "0x2000", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 }, 00:04:09.732 "bdev_nvme": { 00:04:09.732 "mask": "0x4000", 00:04:09.732 "tpoint_mask": "0x0" 00:04:09.732 } 00:04:09.732 }' 00:04:09.732 16:03:08 -- rpc/rpc.sh@43 -- # jq length 00:04:09.732 16:03:08 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:09.732 16:03:08 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.732 16:03:08 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.732 16:03:08 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.732 16:03:08 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.732 16:03:08 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.732 16:03:08 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.732 16:03:08 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.732 16:03:08 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.732 00:04:09.732 real 0m0.164s 00:04:09.732 user 0m0.133s 00:04:09.732 sys 0m0.024s 00:04:09.732 16:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.732 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.732 ************************************ 00:04:09.732 END TEST rpc_trace_cmd_test 
00:04:09.732 ************************************ 00:04:09.995 16:03:08 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.995 16:03:08 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.995 16:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.995 16:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 ************************************ 00:04:09.995 START TEST rpc_daemon_integrity 00:04:09.995 ************************************ 00:04:09.995 16:03:08 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:09.995 16:03:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.995 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.995 16:03:08 -- rpc/rpc.sh@13 -- # jq length 00:04:09.995 16:03:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.995 16:03:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.995 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.995 16:03:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.995 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.995 { 00:04:09.995 "name": "Malloc2", 00:04:09.995 "aliases": [ 00:04:09.995 "ed4daff1-6005-49c1-bdef-e0974d6acafb" 00:04:09.995 ], 00:04:09.995 "product_name": "Malloc disk", 00:04:09.995 "block_size": 512, 00:04:09.995 "num_blocks": 16384, 00:04:09.995 "uuid": "ed4daff1-6005-49c1-bdef-e0974d6acafb", 00:04:09.995 "assigned_rate_limits": { 00:04:09.995 "rw_ios_per_sec": 0, 00:04:09.995 "rw_mbytes_per_sec": 0, 00:04:09.995 "r_mbytes_per_sec": 0, 00:04:09.995 "w_mbytes_per_sec": 0 00:04:09.995 }, 00:04:09.995 "claimed": false, 00:04:09.995 "zoned": false, 00:04:09.995 "supported_io_types": { 00:04:09.995 "read": true, 00:04:09.995 "write": true, 00:04:09.995 "unmap": true, 00:04:09.995 "write_zeroes": true, 00:04:09.995 "flush": true, 00:04:09.995 "reset": true, 00:04:09.995 "compare": false, 00:04:09.995 "compare_and_write": false, 00:04:09.995 "abort": true, 00:04:09.995 "nvme_admin": false, 00:04:09.995 "nvme_io": false 00:04:09.995 }, 00:04:09.995 "memory_domains": [ 00:04:09.995 { 00:04:09.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.995 "dma_device_type": 2 00:04:09.995 } 00:04:09.995 ], 00:04:09.995 "driver_specific": {} 00:04:09.995 } 00:04:09.995 ]' 00:04:09.995 16:03:08 -- rpc/rpc.sh@17 -- # jq length 00:04:09.995 16:03:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.995 16:03:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.995 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 [2024-04-23 16:03:08.763744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.995 [2024-04-23 16:03:08.763786] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.995 [2024-04-23 16:03:08.763808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:04:09.995 [2024-04-23 16:03:08.763817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.995 [2024-04-23 16:03:08.765612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.995 [2024-04-23 16:03:08.765652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.995 Passthru0 00:04:09.995 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.995 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.995 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.995 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.995 16:03:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.995 { 00:04:09.995 "name": "Malloc2", 00:04:09.995 "aliases": [ 00:04:09.995 "ed4daff1-6005-49c1-bdef-e0974d6acafb" 00:04:09.995 ], 00:04:09.995 "product_name": "Malloc disk", 00:04:09.995 "block_size": 512, 00:04:09.995 "num_blocks": 16384, 00:04:09.995 "uuid": "ed4daff1-6005-49c1-bdef-e0974d6acafb", 00:04:09.995 "assigned_rate_limits": { 00:04:09.995 "rw_ios_per_sec": 0, 00:04:09.995 "rw_mbytes_per_sec": 0, 00:04:09.995 "r_mbytes_per_sec": 0, 00:04:09.995 "w_mbytes_per_sec": 0 00:04:09.995 }, 00:04:09.995 "claimed": true, 00:04:09.995 "claim_type": "exclusive_write", 00:04:09.995 "zoned": false, 00:04:09.995 "supported_io_types": { 00:04:09.995 "read": true, 00:04:09.995 "write": true, 00:04:09.995 "unmap": true, 00:04:09.995 "write_zeroes": true, 00:04:09.995 "flush": true, 00:04:09.995 "reset": true, 00:04:09.995 "compare": false, 00:04:09.995 "compare_and_write": false, 00:04:09.996 "abort": true, 00:04:09.996 "nvme_admin": false, 00:04:09.996 "nvme_io": false 00:04:09.996 }, 00:04:09.996 "memory_domains": [ 00:04:09.996 { 00:04:09.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.996 "dma_device_type": 2 00:04:09.996 } 00:04:09.996 ], 00:04:09.996 "driver_specific": {} 00:04:09.996 }, 00:04:09.996 { 00:04:09.996 "name": "Passthru0", 00:04:09.996 "aliases": [ 00:04:09.996 "c19da4fa-c610-50aa-82a8-7c708d767ec7" 00:04:09.996 ], 00:04:09.996 "product_name": "passthru", 00:04:09.996 "block_size": 512, 00:04:09.996 "num_blocks": 16384, 00:04:09.996 "uuid": "c19da4fa-c610-50aa-82a8-7c708d767ec7", 00:04:09.996 "assigned_rate_limits": { 00:04:09.996 "rw_ios_per_sec": 0, 00:04:09.996 "rw_mbytes_per_sec": 0, 00:04:09.996 "r_mbytes_per_sec": 0, 00:04:09.996 "w_mbytes_per_sec": 0 00:04:09.996 }, 00:04:09.996 "claimed": false, 00:04:09.996 "zoned": false, 00:04:09.996 "supported_io_types": { 00:04:09.996 "read": true, 00:04:09.996 "write": true, 00:04:09.996 "unmap": true, 00:04:09.996 "write_zeroes": true, 00:04:09.996 "flush": true, 00:04:09.996 "reset": true, 00:04:09.996 "compare": false, 00:04:09.996 "compare_and_write": false, 00:04:09.996 "abort": true, 00:04:09.996 "nvme_admin": false, 00:04:09.996 "nvme_io": false 00:04:09.996 }, 00:04:09.996 "memory_domains": [ 00:04:09.996 { 00:04:09.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.996 "dma_device_type": 2 00:04:09.996 } 00:04:09.996 ], 00:04:09.996 "driver_specific": { 00:04:09.996 "passthru": { 00:04:09.996 "name": "Passthru0", 00:04:09.996 "base_bdev_name": "Malloc2" 00:04:09.996 } 00:04:09.996 } 00:04:09.996 } 00:04:09.996 ]' 00:04:09.996 16:03:08 
-- rpc/rpc.sh@21 -- # jq length 00:04:09.996 16:03:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.996 16:03:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.996 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.996 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.996 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.996 16:03:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.996 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.996 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.996 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.996 16:03:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.996 16:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.996 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.996 16:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.996 16:03:08 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.996 16:03:08 -- rpc/rpc.sh@26 -- # jq length 00:04:09.996 16:03:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.996 00:04:09.996 real 0m0.202s 00:04:09.996 user 0m0.104s 00:04:09.996 sys 0m0.032s 00:04:09.996 16:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.996 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:09.996 ************************************ 00:04:09.996 END TEST rpc_daemon_integrity 00:04:09.996 ************************************ 00:04:09.996 16:03:08 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.996 16:03:08 -- rpc/rpc.sh@84 -- # killprocess 2866088 00:04:09.996 16:03:08 -- common/autotest_common.sh@926 -- # '[' -z 2866088 ']' 00:04:09.996 16:03:08 -- common/autotest_common.sh@930 -- # kill -0 2866088 00:04:09.996 16:03:08 -- common/autotest_common.sh@931 -- # uname 00:04:09.996 16:03:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:09.996 16:03:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2866088 00:04:10.259 16:03:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:10.259 16:03:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:10.259 16:03:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2866088' 00:04:10.259 killing process with pid 2866088 00:04:10.259 16:03:08 -- common/autotest_common.sh@945 -- # kill 2866088 00:04:10.259 16:03:08 -- common/autotest_common.sh@950 -- # wait 2866088 00:04:11.204 00:04:11.204 real 0m2.666s 00:04:11.204 user 0m3.060s 00:04:11.204 sys 0m0.641s 00:04:11.204 16:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.204 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 ************************************ 00:04:11.204 END TEST rpc 00:04:11.204 ************************************ 00:04:11.204 16:03:09 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.204 16:03:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.204 16:03:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.204 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 ************************************ 00:04:11.204 START TEST rpc_client 00:04:11.204 ************************************ 00:04:11.204 16:03:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:11.204 * Looking for test storage... 
00:04:11.204 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:11.204 16:03:09 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:11.204 OK 00:04:11.204 16:03:09 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:11.204 00:04:11.204 real 0m0.120s 00:04:11.204 user 0m0.044s 00:04:11.204 sys 0m0.081s 00:04:11.204 16:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.204 16:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 ************************************ 00:04:11.204 END TEST rpc_client 00:04:11.204 ************************************ 00:04:11.204 16:03:10 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.204 16:03:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.204 16:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.204 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 ************************************ 00:04:11.204 START TEST json_config 00:04:11.204 ************************************ 00:04:11.204 16:03:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:11.204 16:03:10 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:11.204 16:03:10 -- nvmf/common.sh@7 -- # uname -s 00:04:11.204 16:03:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.204 16:03:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.204 16:03:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.204 16:03:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.204 16:03:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.204 16:03:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.204 16:03:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.204 16:03:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.204 16:03:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.204 16:03:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.204 16:03:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:11.204 16:03:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:11.204 16:03:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.204 16:03:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.204 16:03:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.204 16:03:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:11.204 16:03:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.204 16:03:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.204 16:03:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.204 16:03:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.204 
16:03:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.204 16:03:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.204 16:03:10 -- paths/export.sh@5 -- # export PATH 00:04:11.205 16:03:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.205 16:03:10 -- nvmf/common.sh@46 -- # : 0 00:04:11.205 16:03:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:11.205 16:03:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:11.205 16:03:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:11.205 16:03:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.205 16:03:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.205 16:03:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:11.205 16:03:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:11.205 16:03:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:11.205 16:03:10 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:11.205 16:03:10 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:11.205 16:03:10 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:11.205 16:03:10 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:11.205 16:03:10 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:11.205 16:03:10 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:11.205 16:03:10 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:11.205 16:03:10 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:11.205 16:03:10 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:11.205 16:03:10 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:11.205 16:03:10 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:11.205 16:03:10 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:11.205 INFO: JSON configuration test init 00:04:11.205 16:03:10 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:11.205 16:03:10 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:11.205 16:03:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.205 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 16:03:10 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:11.205 16:03:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.205 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 16:03:10 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:11.205 16:03:10 -- json_config/json_config.sh@98 -- # local app=target 00:04:11.205 16:03:10 -- json_config/json_config.sh@99 -- # shift 00:04:11.205 16:03:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:11.205 16:03:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:11.205 16:03:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=2867101 00:04:11.205 16:03:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:11.205 Waiting for target to run... 00:04:11.205 16:03:10 -- json_config/json_config.sh@114 -- # waitforlisten 2867101 /var/tmp/spdk_tgt.sock 00:04:11.205 16:03:10 -- common/autotest_common.sh@819 -- # '[' -z 2867101 ']' 00:04:11.205 16:03:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.205 16:03:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:11.205 16:03:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.205 16:03:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:11.205 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 16:03:10 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:11.466 [2024-04-23 16:03:10.193972] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:11.466 [2024-04-23 16:03:10.194100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867101 ] 00:04:11.466 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.727 [2024-04-23 16:03:10.496908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.727 [2024-04-23 16:03:10.577986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:11.727 [2024-04-23 16:03:10.578172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.987 16:03:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:11.987 16:03:10 -- common/autotest_common.sh@852 -- # return 0 00:04:11.987 16:03:10 -- json_config/json_config.sh@115 -- # echo '' 00:04:11.987 00:04:11.987 16:03:10 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:11.987 16:03:10 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:11.988 16:03:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.988 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.988 16:03:10 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:11.988 16:03:10 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:11.988 16:03:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:11.988 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:11.988 16:03:10 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:11.988 16:03:10 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:11.988 16:03:10 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:13.374 16:03:12 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:13.374 16:03:12 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:13.374 16:03:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.374 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:13.374 16:03:12 -- json_config/json_config.sh@48 -- # local ret=0 00:04:13.374 16:03:12 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:13.374 16:03:12 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:13.374 16:03:12 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:13.374 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:13.374 16:03:12 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:13.374 16:03:12 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:13.374 16:03:12 -- json_config/json_config.sh@51 -- # local get_types 00:04:13.374 16:03:12 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:13.374 16:03:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:13.374 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:13.374 16:03:12 -- json_config/json_config.sh@58 -- # return 0 00:04:13.374 16:03:12 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:13.374 16:03:12 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:13.374 16:03:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:13.374 16:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:13.374 16:03:12 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:13.374 16:03:12 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:13.374 16:03:12 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.374 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:13.634 MallocForNvmf0 00:04:13.634 16:03:12 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:13.634 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:13.634 MallocForNvmf1 00:04:13.634 16:03:12 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:13.634 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:13.893 [2024-04-23 16:03:12.685034] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.893 16:03:12 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:13.893 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:14.154 16:03:12 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:14.155 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:14.155 16:03:12 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:14.155 16:03:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:14.416 16:03:13 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:14.416 16:03:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:14.416 [2024-04-23 16:03:13.273554] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:14.416 16:03:13 -- 
json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:14.416 16:03:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:14.416 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.416 16:03:13 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:14.416 16:03:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:14.416 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.676 16:03:13 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:14.676 16:03:13 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:14.676 16:03:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:14.676 MallocBdevForConfigChangeCheck 00:04:14.676 16:03:13 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:14.676 16:03:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:14.676 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:14.676 16:03:13 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:14.676 16:03:13 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.937 16:03:13 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:14.937 INFO: shutting down applications... 00:04:14.937 16:03:13 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:14.937 16:03:13 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:14.937 16:03:13 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:14.937 16:03:13 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:16.850 Calling clear_iscsi_subsystem 00:04:16.850 Calling clear_nvmf_subsystem 00:04:16.850 Calling clear_nbd_subsystem 00:04:16.850 Calling clear_ublk_subsystem 00:04:16.850 Calling clear_vhost_blk_subsystem 00:04:16.850 Calling clear_vhost_scsi_subsystem 00:04:16.850 Calling clear_scheduler_subsystem 00:04:16.850 Calling clear_bdev_subsystem 00:04:16.850 Calling clear_accel_subsystem 00:04:16.850 Calling clear_vmd_subsystem 00:04:16.850 Calling clear_sock_subsystem 00:04:16.850 Calling clear_iobuf_subsystem 00:04:16.850 16:03:15 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:16.850 16:03:15 -- json_config/json_config.sh@396 -- # count=100 00:04:16.850 16:03:15 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:16.850 16:03:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.851 16:03:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:16.851 16:03:15 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:17.111 16:03:16 -- json_config/json_config.sh@398 -- # break 00:04:17.111 16:03:16 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:17.111 16:03:16 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:17.111 16:03:16 -- 
json_config/json_config.sh@120 -- # local app=target 00:04:17.111 16:03:16 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:17.111 16:03:16 -- json_config/json_config.sh@124 -- # [[ -n 2867101 ]] 00:04:17.111 16:03:16 -- json_config/json_config.sh@127 -- # kill -SIGINT 2867101 00:04:17.111 16:03:16 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:17.111 16:03:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:17.111 16:03:16 -- json_config/json_config.sh@130 -- # kill -0 2867101 00:04:17.111 16:03:16 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:17.684 16:03:16 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:17.684 16:03:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:17.684 16:03:16 -- json_config/json_config.sh@130 -- # kill -0 2867101 00:04:17.684 16:03:16 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:17.684 16:03:16 -- json_config/json_config.sh@132 -- # break 00:04:17.684 16:03:16 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:17.684 16:03:16 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:17.684 SPDK target shutdown done 00:04:17.684 16:03:16 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:17.684 INFO: relaunching applications... 00:04:17.684 16:03:16 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.684 16:03:16 -- json_config/json_config.sh@98 -- # local app=target 00:04:17.684 16:03:16 -- json_config/json_config.sh@99 -- # shift 00:04:17.684 16:03:16 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:17.684 16:03:16 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:17.684 16:03:16 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:17.684 16:03:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:17.684 16:03:16 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:17.684 16:03:16 -- json_config/json_config.sh@111 -- # app_pid[$app]=2868434 00:04:17.684 16:03:16 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:17.684 Waiting for target to run... 00:04:17.684 16:03:16 -- json_config/json_config.sh@114 -- # waitforlisten 2868434 /var/tmp/spdk_tgt.sock 00:04:17.684 16:03:16 -- common/autotest_common.sh@819 -- # '[' -z 2868434 ']' 00:04:17.684 16:03:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:17.684 16:03:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:17.684 16:03:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:17.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:17.684 16:03:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:17.684 16:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:17.684 16:03:16 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.946 [2024-04-23 16:03:16.648396] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
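The relaunch step above restarts spdk_tgt from the JSON file saved a moment earlier and then waits for its RPC socket. Stripped of the wrapper functions, the flow is roughly the following sketch; the paths and flags are the ones used in this run, while the explicit polling loop is only an approximation of the waitforlisten helper the script actually calls:

  # restart the target from the previously saved configuration (backgrounded)
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./spdk_tgt_config.json &
  tgt_pid=$!   # this pid is what the later shutdown step signals

  # wait until the RPC socket answers before issuing further RPCs
  # (approximation of waitforlisten; retry count is illustrative)
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done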
00:04:17.946 [2024-04-23 16:03:16.648547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868434 ] 00:04:17.946 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.518 [2024-04-23 16:03:17.180866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.518 [2024-04-23 16:03:17.289885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:18.518 [2024-04-23 16:03:17.290115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.903 [2024-04-23 16:03:18.407979] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.903 [2024-04-23 16:03:18.440278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:19.903 16:03:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:19.903 16:03:18 -- common/autotest_common.sh@852 -- # return 0 00:04:19.903 16:03:18 -- json_config/json_config.sh@115 -- # echo '' 00:04:19.903 00:04:19.903 16:03:18 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:19.903 16:03:18 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:19.903 INFO: Checking if target configuration is the same... 00:04:19.903 16:03:18 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.903 16:03:18 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:19.903 16:03:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.903 + '[' 2 -ne 2 ']' 00:04:19.903 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.903 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:19.903 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:19.903 +++ basename /dev/fd/62 00:04:19.903 ++ mktemp /tmp/62.XXX 00:04:19.903 + tmp_file_1=/tmp/62.bSq 00:04:19.903 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.903 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.903 + tmp_file_2=/tmp/spdk_tgt_config.json.vpi 00:04:19.903 + ret=0 00:04:19.903 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.163 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.163 + diff -u /tmp/62.bSq /tmp/spdk_tgt_config.json.vpi 00:04:20.163 + echo 'INFO: JSON config files are the same' 00:04:20.163 INFO: JSON config files are the same 00:04:20.163 + rm /tmp/62.bSq /tmp/spdk_tgt_config.json.vpi 00:04:20.163 + exit 0 00:04:20.163 16:03:18 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:20.163 16:03:18 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.163 INFO: changing configuration and checking if this can be detected... 
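The "JSON config files are the same" verdict above comes out of a sort-and-diff: json_diff.sh normalizes both the live config (save_config) and the reference file with config_filter.py, then compares them with diff -u. A condensed sketch of that check, assuming config_filter.py filters stdin to stdout as json_diff.sh uses it:

  rpc=./scripts/rpc.py
  filt=./test/json_config/config_filter.py
  ref=./spdk_tgt_config.json

  cur=$(mktemp /tmp/62.XXX)
  # normalize the live configuration and the saved reference the same way
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filt -method sort > "$cur"
  $filt -method sort < "$ref" > "$cur.ref"

  # identical files -> diff exits 0, as in the trace above
  diff -u "$cur.ref" "$cur" && echo 'INFO: JSON config files are the same'
  rm "$cur" "$cur.ref"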
00:04:20.163 16:03:18 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.163 16:03:18 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.163 16:03:19 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.163 16:03:19 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:20.163 16:03:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.163 + '[' 2 -ne 2 ']' 00:04:20.163 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.163 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:20.163 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:20.163 +++ basename /dev/fd/62 00:04:20.424 ++ mktemp /tmp/62.XXX 00:04:20.424 + tmp_file_1=/tmp/62.rLy 00:04:20.424 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.424 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.424 + tmp_file_2=/tmp/spdk_tgt_config.json.qqw 00:04:20.424 + ret=0 00:04:20.424 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.424 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.685 + diff -u /tmp/62.rLy /tmp/spdk_tgt_config.json.qqw 00:04:20.685 + ret=1 00:04:20.685 + echo '=== Start of file: /tmp/62.rLy ===' 00:04:20.685 + cat /tmp/62.rLy 00:04:20.685 + echo '=== End of file: /tmp/62.rLy ===' 00:04:20.685 + echo '' 00:04:20.685 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qqw ===' 00:04:20.685 + cat /tmp/spdk_tgt_config.json.qqw 00:04:20.685 + echo '=== End of file: /tmp/spdk_tgt_config.json.qqw ===' 00:04:20.685 + echo '' 00:04:20.685 + rm /tmp/62.rLy /tmp/spdk_tgt_config.json.qqw 00:04:20.685 + exit 1 00:04:20.685 16:03:19 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:20.685 INFO: configuration change detected. 
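The change-detection pass works the other way around: the MallocBdevForConfigChangeCheck bdev created during setup exists in the saved reference but is deleted over RPC, so the same sort-and-diff is now expected to fail (ret=1 above). Roughly, under the same path assumptions as the previous sketch:

  rpc=./scripts/rpc.py
  filt=./test/json_config/config_filter.py

  # delete the marker bdev so the live config no longer matches the saved reference
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

  # re-run the sort-and-diff; a non-empty diff (exit 1, as above) is the expected result
  if ! diff -u <($filt -method sort < ./spdk_tgt_config.json) \
               <($rpc -s /var/tmp/spdk_tgt.sock save_config | $filt -method sort); then
      echo 'INFO: configuration change detected.'
  fi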
00:04:20.685 16:03:19 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:20.685 16:03:19 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:20.685 16:03:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:20.685 16:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.685 16:03:19 -- json_config/json_config.sh@360 -- # local ret=0 00:04:20.685 16:03:19 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:20.685 16:03:19 -- json_config/json_config.sh@370 -- # [[ -n 2868434 ]] 00:04:20.685 16:03:19 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:20.685 16:03:19 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:20.685 16:03:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:20.685 16:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.685 16:03:19 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:20.685 16:03:19 -- json_config/json_config.sh@246 -- # uname -s 00:04:20.685 16:03:19 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:20.685 16:03:19 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:20.685 16:03:19 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:20.685 16:03:19 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:20.685 16:03:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:20.685 16:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:20.685 16:03:19 -- json_config/json_config.sh@376 -- # killprocess 2868434 00:04:20.685 16:03:19 -- common/autotest_common.sh@926 -- # '[' -z 2868434 ']' 00:04:20.685 16:03:19 -- common/autotest_common.sh@930 -- # kill -0 2868434 00:04:20.685 16:03:19 -- common/autotest_common.sh@931 -- # uname 00:04:20.685 16:03:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:20.685 16:03:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2868434 00:04:20.685 16:03:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:20.685 16:03:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:20.685 16:03:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2868434' 00:04:20.685 killing process with pid 2868434 00:04:20.685 16:03:19 -- common/autotest_common.sh@945 -- # kill 2868434 00:04:20.685 16:03:19 -- common/autotest_common.sh@950 -- # wait 2868434 00:04:22.068 16:03:20 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.068 16:03:20 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:22.068 16:03:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:22.068 16:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.068 16:03:20 -- json_config/json_config.sh@381 -- # return 0 00:04:22.068 16:03:20 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:22.068 INFO: Success 00:04:22.068 00:04:22.068 real 0m10.847s 00:04:22.068 user 0m11.353s 00:04:22.068 sys 0m2.148s 00:04:22.068 16:03:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.068 16:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.068 ************************************ 00:04:22.068 END TEST json_config 00:04:22.068 ************************************ 00:04:22.068 16:03:20 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.068 16:03:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.068 16:03:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.068 16:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.068 ************************************ 00:04:22.068 START TEST json_config_extra_key 00:04:22.068 ************************************ 00:04:22.068 16:03:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.068 16:03:20 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.068 16:03:20 -- nvmf/common.sh@7 -- # uname -s 00:04:22.068 16:03:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.068 16:03:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.068 16:03:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.068 16:03:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.068 16:03:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.068 16:03:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.068 16:03:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.068 16:03:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.068 16:03:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.068 16:03:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.068 16:03:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:22.069 16:03:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:22.069 16:03:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.069 16:03:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.069 16:03:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.069 16:03:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:22.069 16:03:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.069 16:03:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.069 16:03:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.069 16:03:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.069 16:03:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.069 16:03:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.069 16:03:20 -- paths/export.sh@5 -- # export PATH 00:04:22.069 16:03:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.069 16:03:20 -- nvmf/common.sh@46 -- # : 0 00:04:22.069 16:03:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:22.069 16:03:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:22.069 16:03:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:22.069 16:03:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.069 16:03:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.069 16:03:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:22.069 16:03:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:22.069 16:03:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:22.069 INFO: launching applications... 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2869445 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:22.069 Waiting for target to run... 
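json_config_extra_key.sh keeps its per-app bookkeeping in bash associative arrays (app_pid, app_socket, app_params, configs_path) and launches the target from them. A condensed sketch of that bookkeeping, using the same values visible in the trace above:

  declare -A app_pid=( [target]='' )
  declare -A app_socket=( [target]='/var/tmp/spdk_tgt.sock' )
  declare -A app_params=( [target]='-m 0x1 -s 1024' )
  declare -A configs_path=( [target]='./test/json_config/extra_key.json' )

  # start the target with the extra_key JSON config and record its pid
  app=target
  ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
      --json "${configs_path[$app]}" &
  app_pid[$app]=$!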
00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2869445 /var/tmp/spdk_tgt.sock 00:04:22.069 16:03:20 -- common/autotest_common.sh@819 -- # '[' -z 2869445 ']' 00:04:22.069 16:03:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.069 16:03:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:22.069 16:03:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.069 16:03:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:22.069 16:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:22.069 16:03:20 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.330 [2024-04-23 16:03:21.049479] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:22.330 [2024-04-23 16:03:21.049608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869445 ] 00:04:22.330 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.590 [2024-04-23 16:03:21.361110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.590 [2024-04-23 16:03:21.442402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:22.590 [2024-04-23 16:03:21.442591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.160 16:03:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:23.160 16:03:21 -- common/autotest_common.sh@852 -- # return 0 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:23.160 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:23.160 INFO: shutting down applications... 
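Shutdown follows the same pattern in both json_config.sh and json_config_extra_key.sh: send SIGINT to the recorded pid, then poll with kill -0 for up to 30 half-second intervals before declaring the shutdown done. As a sketch of the loop seen in the trace:

  app=target
  pid=${app_pid[$app]}          # 2869445 in this run
  kill -SIGINT "$pid"

  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # stop polling once the process is gone
      sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'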
00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2869445 ]] 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2869445 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2869445 00:04:23.160 16:03:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:23.419 16:03:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:23.419 16:03:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.419 16:03:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2869445 00:04:23.419 16:03:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2869445 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:23.990 SPDK target shutdown done 00:04:23.990 16:03:22 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:23.990 Success 00:04:23.990 00:04:23.990 real 0m1.903s 00:04:23.990 user 0m1.737s 00:04:23.990 sys 0m0.433s 00:04:23.990 16:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.990 16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:23.990 ************************************ 00:04:23.990 END TEST json_config_extra_key 00:04:23.990 ************************************ 00:04:23.990 16:03:22 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.990 16:03:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.990 16:03:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.990 16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:23.990 ************************************ 00:04:23.990 START TEST alias_rpc 00:04:23.990 ************************************ 00:04:23.990 16:03:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.990 * Looking for test storage... 
00:04:23.990 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:04:23.990 16:03:22 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.990 16:03:22 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2869800 00:04:23.990 16:03:22 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.990 16:03:22 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2869800 00:04:23.990 16:03:22 -- common/autotest_common.sh@819 -- # '[' -z 2869800 ']' 00:04:23.990 16:03:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.990 16:03:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:23.990 16:03:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.990 16:03:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:23.990 16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:24.249 [2024-04-23 16:03:22.955075] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:24.249 [2024-04-23 16:03:22.955162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869800 ] 00:04:24.249 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.249 [2024-04-23 16:03:23.045978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.249 [2024-04-23 16:03:23.147774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:24.249 [2024-04-23 16:03:23.147956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.184 16:03:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:25.184 16:03:23 -- common/autotest_common.sh@852 -- # return 0 00:04:25.184 16:03:23 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:25.184 16:03:23 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2869800 00:04:25.184 16:03:23 -- common/autotest_common.sh@926 -- # '[' -z 2869800 ']' 00:04:25.184 16:03:23 -- common/autotest_common.sh@930 -- # kill -0 2869800 00:04:25.184 16:03:23 -- common/autotest_common.sh@931 -- # uname 00:04:25.184 16:03:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:25.184 16:03:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2869800 00:04:25.184 16:03:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:25.184 16:03:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:25.184 16:03:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2869800' 00:04:25.184 killing process with pid 2869800 00:04:25.184 16:03:23 -- common/autotest_common.sh@945 -- # kill 2869800 00:04:25.184 16:03:23 -- common/autotest_common.sh@950 -- # wait 2869800 00:04:26.119 00:04:26.119 real 0m1.960s 00:04:26.119 user 0m2.015s 00:04:26.119 sys 0m0.415s 00:04:26.119 16:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.119 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.119 ************************************ 00:04:26.119 END TEST alias_rpc 00:04:26.119 ************************************ 00:04:26.119 16:03:24 -- 
spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:26.119 16:03:24 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.119 16:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.119 16:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.119 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.119 ************************************ 00:04:26.119 START TEST spdkcli_tcp 00:04:26.119 ************************************ 00:04:26.119 16:03:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.119 * Looking for test storage... 00:04:26.119 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:04:26.119 16:03:24 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:26.119 16:03:24 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.119 16:03:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:26.119 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2870418 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@27 -- # waitforlisten 2870418 00:04:26.119 16:03:24 -- common/autotest_common.sh@819 -- # '[' -z 2870418 ']' 00:04:26.119 16:03:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.119 16:03:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:26.119 16:03:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.119 16:03:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:26.119 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:04:26.119 16:03:24 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.119 [2024-04-23 16:03:24.996226] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:26.119 [2024-04-23 16:03:24.996352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870418 ] 00:04:26.379 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.379 [2024-04-23 16:03:25.113736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.379 [2024-04-23 16:03:25.211077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:26.379 [2024-04-23 16:03:25.211313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.379 [2024-04-23 16:03:25.211321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.948 16:03:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:26.948 16:03:25 -- common/autotest_common.sh@852 -- # return 0 00:04:26.948 16:03:25 -- spdkcli/tcp.sh@31 -- # socat_pid=2870452 00:04:26.948 16:03:25 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:26.948 16:03:25 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:26.948 [ 00:04:26.948 "bdev_malloc_delete", 00:04:26.948 "bdev_malloc_create", 00:04:26.948 "bdev_null_resize", 00:04:26.948 "bdev_null_delete", 00:04:26.948 "bdev_null_create", 00:04:26.948 "bdev_nvme_cuse_unregister", 00:04:26.948 "bdev_nvme_cuse_register", 00:04:26.948 "bdev_opal_new_user", 00:04:26.948 "bdev_opal_set_lock_state", 00:04:26.948 "bdev_opal_delete", 00:04:26.948 "bdev_opal_get_info", 00:04:26.948 "bdev_opal_create", 00:04:26.948 "bdev_nvme_opal_revert", 00:04:26.948 "bdev_nvme_opal_init", 00:04:26.948 "bdev_nvme_send_cmd", 00:04:26.948 "bdev_nvme_get_path_iostat", 00:04:26.948 "bdev_nvme_get_mdns_discovery_info", 00:04:26.948 "bdev_nvme_stop_mdns_discovery", 00:04:26.948 "bdev_nvme_start_mdns_discovery", 00:04:26.948 "bdev_nvme_set_multipath_policy", 00:04:26.948 "bdev_nvme_set_preferred_path", 00:04:26.948 "bdev_nvme_get_io_paths", 00:04:26.948 "bdev_nvme_remove_error_injection", 00:04:26.948 "bdev_nvme_add_error_injection", 00:04:26.948 "bdev_nvme_get_discovery_info", 00:04:26.948 "bdev_nvme_stop_discovery", 00:04:26.948 "bdev_nvme_start_discovery", 00:04:26.948 "bdev_nvme_get_controller_health_info", 00:04:26.948 "bdev_nvme_disable_controller", 00:04:26.948 "bdev_nvme_enable_controller", 00:04:26.948 "bdev_nvme_reset_controller", 00:04:26.948 "bdev_nvme_get_transport_statistics", 00:04:26.948 "bdev_nvme_apply_firmware", 00:04:26.948 "bdev_nvme_detach_controller", 00:04:26.948 "bdev_nvme_get_controllers", 00:04:26.948 "bdev_nvme_attach_controller", 00:04:26.948 "bdev_nvme_set_hotplug", 00:04:26.948 "bdev_nvme_set_options", 00:04:26.948 "bdev_passthru_delete", 00:04:26.948 "bdev_passthru_create", 00:04:26.948 "bdev_lvol_grow_lvstore", 00:04:26.948 "bdev_lvol_get_lvols", 00:04:26.948 "bdev_lvol_get_lvstores", 00:04:26.948 "bdev_lvol_delete", 00:04:26.948 "bdev_lvol_set_read_only", 00:04:26.948 "bdev_lvol_resize", 00:04:26.948 "bdev_lvol_decouple_parent", 00:04:26.948 "bdev_lvol_inflate", 00:04:26.948 "bdev_lvol_rename", 00:04:26.948 "bdev_lvol_clone_bdev", 00:04:26.948 "bdev_lvol_clone", 00:04:26.948 "bdev_lvol_snapshot", 00:04:26.948 "bdev_lvol_create", 00:04:26.948 "bdev_lvol_delete_lvstore", 00:04:26.948 "bdev_lvol_rename_lvstore", 00:04:26.948 "bdev_lvol_create_lvstore", 00:04:26.948 "bdev_raid_set_options", 00:04:26.948 
"bdev_raid_remove_base_bdev", 00:04:26.948 "bdev_raid_add_base_bdev", 00:04:26.948 "bdev_raid_delete", 00:04:26.948 "bdev_raid_create", 00:04:26.948 "bdev_raid_get_bdevs", 00:04:26.948 "bdev_error_inject_error", 00:04:26.948 "bdev_error_delete", 00:04:26.948 "bdev_error_create", 00:04:26.948 "bdev_split_delete", 00:04:26.948 "bdev_split_create", 00:04:26.948 "bdev_delay_delete", 00:04:26.948 "bdev_delay_create", 00:04:26.948 "bdev_delay_update_latency", 00:04:26.948 "bdev_zone_block_delete", 00:04:26.948 "bdev_zone_block_create", 00:04:26.948 "blobfs_create", 00:04:26.948 "blobfs_detect", 00:04:26.948 "blobfs_set_cache_size", 00:04:26.948 "bdev_aio_delete", 00:04:26.948 "bdev_aio_rescan", 00:04:26.948 "bdev_aio_create", 00:04:26.948 "bdev_ftl_set_property", 00:04:26.948 "bdev_ftl_get_properties", 00:04:26.948 "bdev_ftl_get_stats", 00:04:26.948 "bdev_ftl_unmap", 00:04:26.948 "bdev_ftl_unload", 00:04:26.948 "bdev_ftl_delete", 00:04:26.948 "bdev_ftl_load", 00:04:26.948 "bdev_ftl_create", 00:04:26.948 "bdev_virtio_attach_controller", 00:04:26.948 "bdev_virtio_scsi_get_devices", 00:04:26.948 "bdev_virtio_detach_controller", 00:04:26.948 "bdev_virtio_blk_set_hotplug", 00:04:26.948 "bdev_iscsi_delete", 00:04:26.948 "bdev_iscsi_create", 00:04:26.948 "bdev_iscsi_set_options", 00:04:26.948 "accel_error_inject_error", 00:04:26.948 "ioat_scan_accel_module", 00:04:26.948 "dsa_scan_accel_module", 00:04:26.948 "iaa_scan_accel_module", 00:04:26.948 "iscsi_set_options", 00:04:26.948 "iscsi_get_auth_groups", 00:04:26.948 "iscsi_auth_group_remove_secret", 00:04:26.948 "iscsi_auth_group_add_secret", 00:04:26.948 "iscsi_delete_auth_group", 00:04:26.948 "iscsi_create_auth_group", 00:04:26.948 "iscsi_set_discovery_auth", 00:04:26.948 "iscsi_get_options", 00:04:26.948 "iscsi_target_node_request_logout", 00:04:26.948 "iscsi_target_node_set_redirect", 00:04:26.948 "iscsi_target_node_set_auth", 00:04:26.948 "iscsi_target_node_add_lun", 00:04:26.948 "iscsi_get_connections", 00:04:26.948 "iscsi_portal_group_set_auth", 00:04:26.948 "iscsi_start_portal_group", 00:04:26.948 "iscsi_delete_portal_group", 00:04:26.948 "iscsi_create_portal_group", 00:04:26.948 "iscsi_get_portal_groups", 00:04:26.948 "iscsi_delete_target_node", 00:04:26.948 "iscsi_target_node_remove_pg_ig_maps", 00:04:26.948 "iscsi_target_node_add_pg_ig_maps", 00:04:26.948 "iscsi_create_target_node", 00:04:26.948 "iscsi_get_target_nodes", 00:04:26.948 "iscsi_delete_initiator_group", 00:04:26.948 "iscsi_initiator_group_remove_initiators", 00:04:26.948 "iscsi_initiator_group_add_initiators", 00:04:26.948 "iscsi_create_initiator_group", 00:04:26.948 "iscsi_get_initiator_groups", 00:04:26.948 "nvmf_set_crdt", 00:04:26.948 "nvmf_set_config", 00:04:26.948 "nvmf_set_max_subsystems", 00:04:26.948 "nvmf_subsystem_get_listeners", 00:04:26.948 "nvmf_subsystem_get_qpairs", 00:04:26.948 "nvmf_subsystem_get_controllers", 00:04:26.948 "nvmf_get_stats", 00:04:26.948 "nvmf_get_transports", 00:04:26.948 "nvmf_create_transport", 00:04:26.948 "nvmf_get_targets", 00:04:26.948 "nvmf_delete_target", 00:04:26.948 "nvmf_create_target", 00:04:26.948 "nvmf_subsystem_allow_any_host", 00:04:26.948 "nvmf_subsystem_remove_host", 00:04:26.948 "nvmf_subsystem_add_host", 00:04:26.948 "nvmf_subsystem_remove_ns", 00:04:26.948 "nvmf_subsystem_add_ns", 00:04:26.948 "nvmf_subsystem_listener_set_ana_state", 00:04:26.948 "nvmf_discovery_get_referrals", 00:04:26.948 "nvmf_discovery_remove_referral", 00:04:26.948 "nvmf_discovery_add_referral", 00:04:26.948 "nvmf_subsystem_remove_listener", 
00:04:26.948 "nvmf_subsystem_add_listener", 00:04:26.948 "nvmf_delete_subsystem", 00:04:26.948 "nvmf_create_subsystem", 00:04:26.948 "nvmf_get_subsystems", 00:04:26.948 "env_dpdk_get_mem_stats", 00:04:26.948 "nbd_get_disks", 00:04:26.948 "nbd_stop_disk", 00:04:26.948 "nbd_start_disk", 00:04:26.948 "ublk_recover_disk", 00:04:26.948 "ublk_get_disks", 00:04:26.948 "ublk_stop_disk", 00:04:26.948 "ublk_start_disk", 00:04:26.948 "ublk_destroy_target", 00:04:26.948 "ublk_create_target", 00:04:26.949 "virtio_blk_create_transport", 00:04:26.949 "virtio_blk_get_transports", 00:04:26.949 "vhost_controller_set_coalescing", 00:04:26.949 "vhost_get_controllers", 00:04:26.949 "vhost_delete_controller", 00:04:26.949 "vhost_create_blk_controller", 00:04:26.949 "vhost_scsi_controller_remove_target", 00:04:26.949 "vhost_scsi_controller_add_target", 00:04:26.949 "vhost_start_scsi_controller", 00:04:26.949 "vhost_create_scsi_controller", 00:04:26.949 "thread_set_cpumask", 00:04:26.949 "framework_get_scheduler", 00:04:26.949 "framework_set_scheduler", 00:04:26.949 "framework_get_reactors", 00:04:26.949 "thread_get_io_channels", 00:04:26.949 "thread_get_pollers", 00:04:26.949 "thread_get_stats", 00:04:26.949 "framework_monitor_context_switch", 00:04:26.949 "spdk_kill_instance", 00:04:26.949 "log_enable_timestamps", 00:04:26.949 "log_get_flags", 00:04:26.949 "log_clear_flag", 00:04:26.949 "log_set_flag", 00:04:26.949 "log_get_level", 00:04:26.949 "log_set_level", 00:04:26.949 "log_get_print_level", 00:04:26.949 "log_set_print_level", 00:04:26.949 "framework_enable_cpumask_locks", 00:04:26.949 "framework_disable_cpumask_locks", 00:04:26.949 "framework_wait_init", 00:04:26.949 "framework_start_init", 00:04:26.949 "scsi_get_devices", 00:04:26.949 "bdev_get_histogram", 00:04:26.949 "bdev_enable_histogram", 00:04:26.949 "bdev_set_qos_limit", 00:04:26.949 "bdev_set_qd_sampling_period", 00:04:26.949 "bdev_get_bdevs", 00:04:26.949 "bdev_reset_iostat", 00:04:26.949 "bdev_get_iostat", 00:04:26.949 "bdev_examine", 00:04:26.949 "bdev_wait_for_examine", 00:04:26.949 "bdev_set_options", 00:04:26.949 "notify_get_notifications", 00:04:26.949 "notify_get_types", 00:04:26.949 "accel_get_stats", 00:04:26.949 "accel_set_options", 00:04:26.949 "accel_set_driver", 00:04:26.949 "accel_crypto_key_destroy", 00:04:26.949 "accel_crypto_keys_get", 00:04:26.949 "accel_crypto_key_create", 00:04:26.949 "accel_assign_opc", 00:04:26.949 "accel_get_module_info", 00:04:26.949 "accel_get_opc_assignments", 00:04:26.949 "vmd_rescan", 00:04:26.949 "vmd_remove_device", 00:04:26.949 "vmd_enable", 00:04:26.949 "sock_set_default_impl", 00:04:26.949 "sock_impl_set_options", 00:04:26.949 "sock_impl_get_options", 00:04:26.949 "iobuf_get_stats", 00:04:26.949 "iobuf_set_options", 00:04:26.949 "framework_get_pci_devices", 00:04:26.949 "framework_get_config", 00:04:26.949 "framework_get_subsystems", 00:04:26.949 "trace_get_info", 00:04:26.949 "trace_get_tpoint_group_mask", 00:04:26.949 "trace_disable_tpoint_group", 00:04:26.949 "trace_enable_tpoint_group", 00:04:26.949 "trace_clear_tpoint_mask", 00:04:26.949 "trace_set_tpoint_mask", 00:04:26.949 "spdk_get_version", 00:04:26.949 "rpc_get_methods" 00:04:26.949 ] 00:04:26.949 16:03:25 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:26.949 16:03:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:26.949 16:03:25 -- common/autotest_common.sh@10 -- # set +x 00:04:26.949 16:03:25 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:26.949 16:03:25 -- spdkcli/tcp.sh@38 -- # killprocess 
2870418 00:04:26.949 16:03:25 -- common/autotest_common.sh@926 -- # '[' -z 2870418 ']' 00:04:26.949 16:03:25 -- common/autotest_common.sh@930 -- # kill -0 2870418 00:04:26.949 16:03:25 -- common/autotest_common.sh@931 -- # uname 00:04:26.949 16:03:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:26.949 16:03:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2870418 00:04:27.208 16:03:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:27.208 16:03:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:27.208 16:03:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2870418' 00:04:27.208 killing process with pid 2870418 00:04:27.208 16:03:25 -- common/autotest_common.sh@945 -- # kill 2870418 00:04:27.208 16:03:25 -- common/autotest_common.sh@950 -- # wait 2870418 00:04:28.152 00:04:28.152 real 0m1.915s 00:04:28.152 user 0m3.262s 00:04:28.152 sys 0m0.488s 00:04:28.152 16:03:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.152 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.152 ************************************ 00:04:28.152 END TEST spdkcli_tcp 00:04:28.152 ************************************ 00:04:28.152 16:03:26 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.152 16:03:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.152 16:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.152 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.152 ************************************ 00:04:28.152 START TEST dpdk_mem_utility 00:04:28.152 ************************************ 00:04:28.152 16:03:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.152 * Looking for test storage... 00:04:28.152 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:04:28.152 16:03:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.152 16:03:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2870807 00:04:28.152 16:03:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2870807 00:04:28.152 16:03:26 -- common/autotest_common.sh@819 -- # '[' -z 2870807 ']' 00:04:28.152 16:03:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.152 16:03:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:28.152 16:03:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.152 16:03:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:28.152 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:28.152 16:03:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.152 [2024-04-23 16:03:26.979595] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
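The memory dump that follows is produced in two steps: the env_dpdk_get_mem_stats RPC (issued through the rpc_cmd helper) asks the target to write its DPDK memory statistics, and dpdk_mem_info.py then summarizes that dump. Reduced to plain commands, assuming rpc.py's default socket and dpdk_mem_info.py reading the dump from its default location:

  # dump DPDK memory statistics from the running target; the RPC reports the
  # file it wrote ("/tmp/spdk_mem_dump.txt" below)
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # summarize the dump: overall heap/mempool/memzone totals, then per-element
  # detail for heap 0 (the "-m 0" pass)
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0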
00:04:28.153 [2024-04-23 16:03:26.979756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870807 ] 00:04:28.153 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.415 [2024-04-23 16:03:27.112742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.415 [2024-04-23 16:03:27.208176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:28.415 [2024-04-23 16:03:27.208399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.989 16:03:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:28.989 16:03:27 -- common/autotest_common.sh@852 -- # return 0 00:04:28.989 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.989 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.989 16:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:28.989 16:03:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.989 { 00:04:28.989 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.989 } 00:04:28.989 16:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:28.989 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.989 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:28.989 1 heaps totaling size 820.000000 MiB 00:04:28.989 size: 820.000000 MiB heap id: 0 00:04:28.989 end heaps---------- 00:04:28.989 8 mempools totaling size 598.116089 MiB 00:04:28.989 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.989 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.989 size: 84.521057 MiB name: bdev_io_2870807 00:04:28.989 size: 51.011292 MiB name: evtpool_2870807 00:04:28.989 size: 50.003479 MiB name: msgpool_2870807 00:04:28.989 size: 21.763794 MiB name: PDU_Pool 00:04:28.989 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.989 size: 0.026123 MiB name: Session_Pool 00:04:28.989 end mempools------- 00:04:28.989 6 memzones totaling size 4.142822 MiB 00:04:28.989 size: 1.000366 MiB name: RG_ring_0_2870807 00:04:28.989 size: 1.000366 MiB name: RG_ring_1_2870807 00:04:28.989 size: 1.000366 MiB name: RG_ring_4_2870807 00:04:28.989 size: 1.000366 MiB name: RG_ring_5_2870807 00:04:28.989 size: 0.125366 MiB name: RG_ring_2_2870807 00:04:28.989 size: 0.015991 MiB name: RG_ring_3_2870807 00:04:28.989 end memzones------- 00:04:28.989 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.989 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:04:28.989 list of free elements. 
size: 18.514832 MiB 00:04:28.989 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:28.989 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:28.989 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:28.989 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:28.989 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:28.989 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:28.989 element at address: 0x200019600000 with size: 0.999329 MiB 00:04:28.989 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:28.989 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:28.989 element at address: 0x200018e00000 with size: 0.959900 MiB 00:04:28.989 element at address: 0x200019900040 with size: 0.937256 MiB 00:04:28.989 element at address: 0x200000200000 with size: 0.840942 MiB 00:04:28.989 element at address: 0x20001b000000 with size: 0.583191 MiB 00:04:28.989 element at address: 0x200019200000 with size: 0.491150 MiB 00:04:28.989 element at address: 0x200019a00000 with size: 0.485657 MiB 00:04:28.989 element at address: 0x200013800000 with size: 0.470581 MiB 00:04:28.989 element at address: 0x200028400000 with size: 0.411072 MiB 00:04:28.989 element at address: 0x200003a00000 with size: 0.356140 MiB 00:04:28.989 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:04:28.989 list of standard malloc elements. size: 199.220764 MiB 00:04:28.989 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:28.989 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:28.989 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:28.989 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:28.989 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:28.989 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:28.989 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:28.989 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:28.989 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:04:28.989 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:04:28.989 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:28.989 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:28.989 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:28.989 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:28.989 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:04:28.989 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:28.989 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:28.989 list of memzone associated elements. size: 602.264404 MiB 00:04:28.989 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:28.989 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.989 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:28.989 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.989 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:28.989 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2870807_0 00:04:28.989 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:28.989 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2870807_0 00:04:28.989 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:28.989 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2870807_0 00:04:28.989 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:28.989 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.989 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:28.989 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.989 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:28.989 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2870807 00:04:28.989 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:28.989 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2870807 00:04:28.989 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:28.989 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2870807 00:04:28.989 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:28.989 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.989 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:28.989 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.989 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:28.989 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.989 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:28.989 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.989 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:28.989 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2870807 00:04:28.989 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:28.989 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2870807 00:04:28.989 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:28.989 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_2870807 00:04:28.989 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:28.989 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2870807 00:04:28.989 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:28.989 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2870807 00:04:28.989 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:04:28.989 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.989 element at address: 0x200013878780 with size: 0.500549 MiB 00:04:28.989 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.989 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:04:28.989 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.989 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:28.989 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2870807 00:04:28.989 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:04:28.989 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.989 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:04:28.989 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.989 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:28.989 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2870807 00:04:28.989 element at address: 0x20002846f540 with size: 0.002502 MiB 00:04:28.989 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.989 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:04:28.989 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2870807 00:04:28.989 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:28.989 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2870807 00:04:28.989 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:04:28.989 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.989 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.990 16:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2870807 00:04:28.990 16:03:27 -- common/autotest_common.sh@926 -- # '[' -z 2870807 ']' 00:04:28.990 16:03:27 -- common/autotest_common.sh@930 -- # kill -0 2870807 00:04:28.990 16:03:27 -- common/autotest_common.sh@931 -- # uname 00:04:28.990 16:03:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:28.990 16:03:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2870807 00:04:28.990 16:03:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:28.990 16:03:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:28.990 16:03:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2870807' 00:04:28.990 killing process with pid 2870807 00:04:28.990 16:03:27 -- common/autotest_common.sh@945 -- # kill 2870807 00:04:28.990 16:03:27 -- common/autotest_common.sh@950 -- # wait 2870807 00:04:29.935 00:04:29.935 real 0m1.909s 00:04:29.935 user 0m1.862s 00:04:29.935 sys 0m0.493s 00:04:29.935 16:03:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.935 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.935 ************************************ 00:04:29.935 END TEST dpdk_mem_utility 00:04:29.935 ************************************ 00:04:29.935 16:03:28 -- spdk/autotest.sh@187 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:29.935 16:03:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.935 16:03:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.935 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.935 ************************************ 00:04:29.935 START TEST event 00:04:29.935 ************************************ 00:04:29.935 16:03:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:29.935 * Looking for test storage... 00:04:29.935 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:29.935 16:03:28 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:29.935 16:03:28 -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.935 16:03:28 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.935 16:03:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:29.935 16:03:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.935 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:29.935 ************************************ 00:04:29.935 START TEST event_perf 00:04:29.935 ************************************ 00:04:29.935 16:03:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:30.196 Running I/O for 1 seconds...[2024-04-23 16:03:28.870423] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:30.196 [2024-04-23 16:03:28.870569] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871172 ] 00:04:30.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.196 [2024-04-23 16:03:29.005237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:30.196 [2024-04-23 16:03:29.106162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.196 [2024-04-23 16:03:29.106203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.197 [2024-04-23 16:03:29.106229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.197 [2024-04-23 16:03:29.106253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.684 Running I/O for 1 seconds... 00:04:31.684 lcore 0: 161602 00:04:31.684 lcore 1: 161603 00:04:31.684 lcore 2: 161602 00:04:31.684 lcore 3: 161602 00:04:31.684 done. 
00:04:31.684 00:04:31.684 real 0m1.438s 00:04:31.684 user 0m4.261s 00:04:31.684 sys 0m0.159s 00:04:31.684 16:03:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.684 16:03:30 -- common/autotest_common.sh@10 -- # set +x 00:04:31.684 ************************************ 00:04:31.684 END TEST event_perf 00:04:31.684 ************************************ 00:04:31.684 16:03:30 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:31.684 16:03:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:31.684 16:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.684 16:03:30 -- common/autotest_common.sh@10 -- # set +x 00:04:31.684 ************************************ 00:04:31.684 START TEST event_reactor 00:04:31.684 ************************************ 00:04:31.684 16:03:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:31.684 [2024-04-23 16:03:30.349331] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:31.684 [2024-04-23 16:03:30.349472] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871490 ] 00:04:31.684 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.684 [2024-04-23 16:03:30.481700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.684 [2024-04-23 16:03:30.577167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.089 test_start 00:04:33.089 oneshot 00:04:33.089 tick 100 00:04:33.089 tick 100 00:04:33.089 tick 250 00:04:33.089 tick 100 00:04:33.089 tick 100 00:04:33.089 tick 250 00:04:33.089 tick 100 00:04:33.089 tick 500 00:04:33.089 tick 100 00:04:33.089 tick 100 00:04:33.089 tick 250 00:04:33.089 tick 100 00:04:33.089 tick 100 00:04:33.089 test_end 00:04:33.089 00:04:33.089 real 0m1.428s 00:04:33.089 user 0m1.264s 00:04:33.089 sys 0m0.152s 00:04:33.089 16:03:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.089 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.089 ************************************ 00:04:33.089 END TEST event_reactor 00:04:33.089 ************************************ 00:04:33.089 16:03:31 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.089 16:03:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:33.089 16:03:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.089 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:04:33.089 ************************************ 00:04:33.089 START TEST event_reactor_perf 00:04:33.089 ************************************ 00:04:33.089 16:03:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.089 [2024-04-23 16:03:31.816611] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:33.089 [2024-04-23 16:03:31.816756] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871818 ] 00:04:33.089 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.089 [2024-04-23 16:03:31.948402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.351 [2024-04-23 16:03:32.046004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.295 test_start 00:04:34.295 test_end 00:04:34.295 Performance: 376728 events per second 00:04:34.295 00:04:34.295 real 0m1.436s 00:04:34.295 user 0m1.287s 00:04:34.295 sys 0m0.140s 00:04:34.295 16:03:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.295 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:04:34.295 ************************************ 00:04:34.295 END TEST event_reactor_perf 00:04:34.295 ************************************ 00:04:34.556 16:03:33 -- event/event.sh@49 -- # uname -s 00:04:34.557 16:03:33 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:34.557 16:03:33 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:34.557 16:03:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.557 16:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.557 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:04:34.557 ************************************ 00:04:34.557 START TEST event_scheduler 00:04:34.557 ************************************ 00:04:34.557 16:03:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:34.557 * Looking for test storage... 00:04:34.557 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:04:34.557 16:03:33 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:34.557 16:03:33 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2872155 00:04:34.557 16:03:33 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.557 16:03:33 -- scheduler/scheduler.sh@37 -- # waitforlisten 2872155 00:04:34.557 16:03:33 -- common/autotest_common.sh@819 -- # '[' -z 2872155 ']' 00:04:34.557 16:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.557 16:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:34.557 16:03:33 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:34.557 16:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.557 16:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:34.557 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:04:34.557 [2024-04-23 16:03:33.383695] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:34.557 [2024-04-23 16:03:33.383824] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872155 ] 00:04:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.817 [2024-04-23 16:03:33.501152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:34.817 [2024-04-23 16:03:33.603261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.817 [2024-04-23 16:03:33.603382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.817 [2024-04-23 16:03:33.603489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.817 [2024-04-23 16:03:33.603499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.389 16:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:35.389 16:03:34 -- common/autotest_common.sh@852 -- # return 0 00:04:35.389 16:03:34 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:35.389 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.389 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.389 POWER: Env isn't set yet! 00:04:35.389 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:35.389 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:35.389 POWER: Cannot set governor of lcore 0 to userspace 00:04:35.389 POWER: Attempting to initialise PSTAT power management... 00:04:35.389 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:35.389 POWER: Initialized successfully for lcore 0 power management 00:04:35.389 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:35.389 POWER: Initialized successfully for lcore 1 power management 00:04:35.389 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:35.389 POWER: Initialized successfully for lcore 2 power management 00:04:35.389 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:35.389 POWER: Initialized successfully for lcore 3 power management 00:04:35.389 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.389 16:03:34 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:35.389 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.389 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 [2024-04-23 16:03:34.371864] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
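The scheduler_create_thread stretch that follows exercises the scheduler test app purely through its RPC plugin. As a rough sketch of that call pattern, pieced together from the xtrace below rather than from the test script itself, the same calls could be issued by hand along these lines (this assumes rpc_cmd resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket and that the scheduler_plugin module is importable, which the test presumably arranges; thread names and masks are copied from the trace):

    # Workspace root as it appears in the trace; adjust for a local tree.
    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

    # Always-active thread pinned to core 0, idle thread pinned to core 1.
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # Create a thread, raise it to 50% active, then delete it again
    # (mirrors the thread_id=11 / thread_id=12 steps in the trace below).
    id=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$id" 50
    $RPC scheduler_thread_delete "$id"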
00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:35.651 16:03:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.651 16:03:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 ************************************ 00:04:35.651 START TEST scheduler_create_thread 00:04:35.651 ************************************ 00:04:35.651 16:03:34 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 2 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 3 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 4 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 5 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 6 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 7 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 8 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 9 00:04:35.651 
16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 10 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.651 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.651 16:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:35.651 16:03:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:35.651 16:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.652 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:04:37.038 16:03:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:37.038 00:04:37.038 real 0m1.173s 00:04:37.038 user 0m0.011s 00:04:37.038 sys 0m0.006s 00:04:37.038 16:03:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.038 16:03:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.038 ************************************ 00:04:37.038 END TEST scheduler_create_thread 00:04:37.038 ************************************ 00:04:37.038 16:03:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.038 16:03:35 -- scheduler/scheduler.sh@46 -- # killprocess 2872155 00:04:37.038 16:03:35 -- common/autotest_common.sh@926 -- # '[' -z 2872155 ']' 00:04:37.038 16:03:35 -- common/autotest_common.sh@930 -- # kill -0 2872155 00:04:37.038 16:03:35 -- common/autotest_common.sh@931 -- # uname 00:04:37.038 16:03:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:37.038 16:03:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2872155 00:04:37.038 16:03:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:37.038 16:03:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:37.038 16:03:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2872155' 00:04:37.038 killing process with pid 2872155 00:04:37.038 16:03:35 -- common/autotest_common.sh@945 -- # kill 2872155 00:04:37.038 16:03:35 -- common/autotest_common.sh@950 -- # wait 2872155 00:04:37.299 [2024-04-23 16:03:36.032724] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
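Both the dpdk_mem_utility and scheduler runs above tear their target process down through the same killprocess helper, and its control flow can be read off the xtrace via the @NNN line markers. A simplified reconstruction of the path actually taken in this log (the sudo branch and extra error handling in the real autotest_common.sh are omitted, so treat this only as an illustration):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                            # @926: need a pid
        kill -0 "$pid" || return 1                           # @930: still running?
        if [ "$(uname)" = Linux ]; then                      # @931
            process_name=$(ps --no-headers -o comm= "$pid")  # @932
        fi
        if [ "$process_name" != sudo ]; then                 # @936: branch taken here
            echo "killing process with pid $pid"             # @944
            kill "$pid"                                      # @945
        fi
        wait "$pid"                                          # @950: reap the test app
    }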
00:04:37.562 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:37.562 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:37.562 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:37.562 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:37.562 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:37.562 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:37.562 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:37.562 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:37.821 00:04:37.821 real 0m3.253s 00:04:37.821 user 0m5.371s 00:04:37.821 sys 0m0.417s 00:04:37.821 16:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.821 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:04:37.821 ************************************ 00:04:37.821 END TEST event_scheduler 00:04:37.821 ************************************ 00:04:37.821 16:03:36 -- event/event.sh@51 -- # modprobe -n nbd 00:04:37.821 16:03:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:37.821 16:03:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.821 16:03:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.822 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:04:37.822 ************************************ 00:04:37.822 START TEST app_repeat 00:04:37.822 ************************************ 00:04:37.822 16:03:36 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:37.822 16:03:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.822 16:03:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.822 16:03:36 -- event/event.sh@13 -- # local nbd_list 00:04:37.822 16:03:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.822 16:03:36 -- event/event.sh@14 -- # local bdev_list 00:04:37.822 16:03:36 -- event/event.sh@15 -- # local repeat_times=4 00:04:37.822 16:03:36 -- event/event.sh@17 -- # modprobe nbd 00:04:37.822 16:03:36 -- event/event.sh@19 -- # repeat_pid=2872808 00:04:37.822 16:03:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.822 16:03:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2872808' 00:04:37.822 Process app_repeat pid: 2872808 00:04:37.822 16:03:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:37.822 16:03:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:37.822 spdk_app_start Round 0 00:04:37.822 16:03:36 -- event/event.sh@25 -- # waitforlisten 2872808 /var/tmp/spdk-nbd.sock 00:04:37.822 16:03:36 -- common/autotest_common.sh@819 -- # '[' -z 2872808 ']' 00:04:37.822 16:03:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.822 16:03:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:37.822 16:03:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:37.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:37.822 16:03:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:37.822 16:03:36 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:37.822 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:04:37.822 [2024-04-23 16:03:36.606223] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:37.822 [2024-04-23 16:03:36.606366] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872808 ] 00:04:37.822 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.822 [2024-04-23 16:03:36.745671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.081 [2024-04-23 16:03:36.844765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.081 [2024-04-23 16:03:36.844766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.652 16:03:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:38.652 16:03:37 -- common/autotest_common.sh@852 -- # return 0 00:04:38.652 16:03:37 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.652 Malloc0 00:04:38.652 16:03:37 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.913 Malloc1 00:04:38.913 16:03:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@12 -- # local i 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.913 16:03:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.913 /dev/nbd0 00:04:39.174 16:03:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.174 16:03:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.174 16:03:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:39.174 16:03:37 -- common/autotest_common.sh@857 -- # local i 00:04:39.174 16:03:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:39.174 16:03:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:39.174 16:03:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 
00:04:39.174 16:03:37 -- common/autotest_common.sh@861 -- # break 00:04:39.174 16:03:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:39.174 16:03:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:39.174 16:03:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.174 1+0 records in 00:04:39.174 1+0 records out 00:04:39.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255491 s, 16.0 MB/s 00:04:39.174 16:03:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:39.174 16:03:37 -- common/autotest_common.sh@874 -- # size=4096 00:04:39.174 16:03:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:39.174 16:03:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:39.174 16:03:37 -- common/autotest_common.sh@877 -- # return 0 00:04:39.174 16:03:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.174 16:03:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.174 16:03:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.174 /dev/nbd1 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.174 16:03:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:39.174 16:03:38 -- common/autotest_common.sh@857 -- # local i 00:04:39.174 16:03:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:39.174 16:03:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:39.174 16:03:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:39.174 16:03:38 -- common/autotest_common.sh@861 -- # break 00:04:39.174 16:03:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:39.174 16:03:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:39.174 16:03:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.174 1+0 records in 00:04:39.174 1+0 records out 00:04:39.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189721 s, 21.6 MB/s 00:04:39.174 16:03:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:39.174 16:03:38 -- common/autotest_common.sh@874 -- # size=4096 00:04:39.174 16:03:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:39.174 16:03:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:39.174 16:03:38 -- common/autotest_common.sh@877 -- # return 0 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.174 16:03:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:39.435 { 00:04:39.435 "nbd_device": "/dev/nbd0", 00:04:39.435 "bdev_name": "Malloc0" 00:04:39.435 }, 00:04:39.435 { 00:04:39.435 "nbd_device": "/dev/nbd1", 00:04:39.435 
"bdev_name": "Malloc1" 00:04:39.435 } 00:04:39.435 ]' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.435 { 00:04:39.435 "nbd_device": "/dev/nbd0", 00:04:39.435 "bdev_name": "Malloc0" 00:04:39.435 }, 00:04:39.435 { 00:04:39.435 "nbd_device": "/dev/nbd1", 00:04:39.435 "bdev_name": "Malloc1" 00:04:39.435 } 00:04:39.435 ]' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.435 /dev/nbd1' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.435 /dev/nbd1' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.435 256+0 records in 00:04:39.435 256+0 records out 00:04:39.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004777 s, 220 MB/s 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.435 256+0 records in 00:04:39.435 256+0 records out 00:04:39.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015228 s, 68.9 MB/s 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.435 256+0 records in 00:04:39.435 256+0 records out 00:04:39.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180743 s, 58.0 MB/s 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@51 -- # local i 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.435 16:03:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.695 16:03:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.695 16:03:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@41 -- # break 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.696 16:03:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@41 -- # break 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@65 -- # true 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.956 16:03:38 -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.956 16:03:38 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:40.217 16:03:39 -- event/event.sh@35 -- # sleep 3 00:04:40.786 [2024-04-23 
16:03:39.518079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.786 [2024-04-23 16:03:39.604830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.786 [2024-04-23 16:03:39.604832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.786 [2024-04-23 16:03:39.684720] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:40.786 [2024-04-23 16:03:39.684762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.319 16:03:42 -- event/event.sh@23 -- # for i in {0..2} 00:04:43.320 16:03:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:43.320 spdk_app_start Round 1 00:04:43.320 16:03:42 -- event/event.sh@25 -- # waitforlisten 2872808 /var/tmp/spdk-nbd.sock 00:04:43.320 16:03:42 -- common/autotest_common.sh@819 -- # '[' -z 2872808 ']' 00:04:43.320 16:03:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.320 16:03:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:43.320 16:03:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.320 16:03:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:43.320 16:03:42 -- common/autotest_common.sh@10 -- # set +x 00:04:43.320 16:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.320 16:03:42 -- common/autotest_common.sh@852 -- # return 0 00:04:43.320 16:03:42 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.580 Malloc0 00:04:43.580 16:03:42 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.841 Malloc1 00:04:43.841 16:03:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@12 -- # local i 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.841 /dev/nbd0 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.841 
16:03:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.841 16:03:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:43.841 16:03:42 -- common/autotest_common.sh@857 -- # local i 00:04:43.841 16:03:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:43.841 16:03:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:43.841 16:03:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:43.841 16:03:42 -- common/autotest_common.sh@861 -- # break 00:04:43.841 16:03:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:43.841 16:03:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:43.841 16:03:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.841 1+0 records in 00:04:43.841 1+0 records out 00:04:43.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301971 s, 13.6 MB/s 00:04:43.841 16:03:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:43.841 16:03:42 -- common/autotest_common.sh@874 -- # size=4096 00:04:43.841 16:03:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:43.841 16:03:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:43.841 16:03:42 -- common/autotest_common.sh@877 -- # return 0 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.841 16:03:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.102 /dev/nbd1 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.102 16:03:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:44.102 16:03:42 -- common/autotest_common.sh@857 -- # local i 00:04:44.102 16:03:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:44.102 16:03:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:44.102 16:03:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:44.102 16:03:42 -- common/autotest_common.sh@861 -- # break 00:04:44.102 16:03:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:44.102 16:03:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:44.102 16:03:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.102 1+0 records in 00:04:44.102 1+0 records out 00:04:44.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251354 s, 16.3 MB/s 00:04:44.102 16:03:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:44.102 16:03:42 -- common/autotest_common.sh@874 -- # size=4096 00:04:44.102 16:03:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:44.102 16:03:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:44.102 16:03:42 -- common/autotest_common.sh@877 -- # return 0 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.102 16:03:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.364 { 00:04:44.364 "nbd_device": "/dev/nbd0", 00:04:44.364 "bdev_name": "Malloc0" 00:04:44.364 }, 00:04:44.364 { 00:04:44.364 "nbd_device": "/dev/nbd1", 00:04:44.364 "bdev_name": "Malloc1" 00:04:44.364 } 00:04:44.364 ]' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.364 { 00:04:44.364 "nbd_device": "/dev/nbd0", 00:04:44.364 "bdev_name": "Malloc0" 00:04:44.364 }, 00:04:44.364 { 00:04:44.364 "nbd_device": "/dev/nbd1", 00:04:44.364 "bdev_name": "Malloc1" 00:04:44.364 } 00:04:44.364 ]' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.364 /dev/nbd1' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.364 /dev/nbd1' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.364 256+0 records in 00:04:44.364 256+0 records out 00:04:44.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524302 s, 200 MB/s 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.364 256+0 records in 00:04:44.364 256+0 records out 00:04:44.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150601 s, 69.6 MB/s 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.364 256+0 records in 00:04:44.364 256+0 records out 00:04:44.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172078 s, 60.9 MB/s 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.364 16:03:43 -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@51 -- # local i 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.364 16:03:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@41 -- # break 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@41 -- # break 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.625 16:03:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.885 16:03:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.885 16:03:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@65 -- # true 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.886 
16:03:43 -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.886 16:03:43 -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.886 16:03:43 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.166 16:03:43 -- event/event.sh@35 -- # sleep 3 00:04:45.738 [2024-04-23 16:03:44.374188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.738 [2024-04-23 16:03:44.462206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.738 [2024-04-23 16:03:44.462211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.738 [2024-04-23 16:03:44.542768] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.738 [2024-04-23 16:03:44.542803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.282 16:03:46 -- event/event.sh@23 -- # for i in {0..2} 00:04:48.282 16:03:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:48.282 spdk_app_start Round 2 00:04:48.282 16:03:46 -- event/event.sh@25 -- # waitforlisten 2872808 /var/tmp/spdk-nbd.sock 00:04:48.282 16:03:46 -- common/autotest_common.sh@819 -- # '[' -z 2872808 ']' 00:04:48.282 16:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.282 16:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.282 16:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:48.283 16:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.283 16:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:48.283 16:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.283 16:03:47 -- common/autotest_common.sh@852 -- # return 0 00:04:48.283 16:03:47 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.283 Malloc0 00:04:48.543 16:03:47 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.543 Malloc1 00:04:48.543 16:03:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@12 -- # local i 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.543 16:03:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.803 /dev/nbd0 00:04:48.803 16:03:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.803 16:03:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.803 16:03:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:48.803 16:03:47 -- common/autotest_common.sh@857 -- # local i 00:04:48.803 16:03:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:48.803 16:03:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:48.803 16:03:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:48.803 16:03:47 -- common/autotest_common.sh@861 -- # break 00:04:48.803 16:03:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:48.803 16:03:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:48.803 16:03:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.803 1+0 records in 00:04:48.803 1+0 records out 00:04:48.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206149 s, 19.9 MB/s 00:04:48.803 16:03:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:48.803 16:03:47 -- common/autotest_common.sh@874 -- # size=4096 00:04:48.803 16:03:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:48.803 16:03:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:04:48.803 16:03:47 -- common/autotest_common.sh@877 -- # return 0 00:04:48.803 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.803 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.803 16:03:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.803 /dev/nbd1 00:04:49.064 16:03:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.065 16:03:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:49.065 16:03:47 -- common/autotest_common.sh@857 -- # local i 00:04:49.065 16:03:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:49.065 16:03:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:49.065 16:03:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:49.065 16:03:47 -- common/autotest_common.sh@861 -- # break 00:04:49.065 16:03:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:49.065 16:03:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:49.065 16:03:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.065 1+0 records in 00:04:49.065 1+0 records out 00:04:49.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290698 s, 14.1 MB/s 00:04:49.065 16:03:47 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:49.065 16:03:47 -- common/autotest_common.sh@874 -- # size=4096 00:04:49.065 16:03:47 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:49.065 16:03:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:49.065 16:03:47 -- common/autotest_common.sh@877 -- # return 0 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.065 { 00:04:49.065 "nbd_device": "/dev/nbd0", 00:04:49.065 "bdev_name": "Malloc0" 00:04:49.065 }, 00:04:49.065 { 00:04:49.065 "nbd_device": "/dev/nbd1", 00:04:49.065 "bdev_name": "Malloc1" 00:04:49.065 } 00:04:49.065 ]' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.065 { 00:04:49.065 "nbd_device": "/dev/nbd0", 00:04:49.065 "bdev_name": "Malloc0" 00:04:49.065 }, 00:04:49.065 { 00:04:49.065 "nbd_device": "/dev/nbd1", 00:04:49.065 "bdev_name": "Malloc1" 00:04:49.065 } 00:04:49.065 ]' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.065 /dev/nbd1' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.065 /dev/nbd1' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 
00:04:49.065 16:03:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.065 256+0 records in 00:04:49.065 256+0 records out 00:04:49.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045843 s, 229 MB/s 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.065 256+0 records in 00:04:49.065 256+0 records out 00:04:49.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150045 s, 69.9 MB/s 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.065 16:03:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.325 256+0 records in 00:04:49.325 256+0 records out 00:04:49.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164849 s, 63.6 MB/s 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@51 -- # local i 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.325 16:03:48 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@41 -- # break 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.325 16:03:48 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@41 -- # break 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.586 16:03:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@65 -- # true 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.846 16:03:48 -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.846 16:03:48 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.846 16:03:48 -- event/event.sh@35 -- # sleep 3 00:04:50.419 [2024-04-23 16:03:49.200714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.419 [2024-04-23 16:03:49.290552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.419 [2024-04-23 16:03:49.290556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.679 [2024-04-23 16:03:49.370314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.679 [2024-04-23 16:03:49.370357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
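Behind the trace above, nbd_dd_data_verify is a plain write-then-compare loop over both NBD devices. Reduced to its essentials (the temporary-file location is illustrative; block size and count match the trace):

# Push 1 MiB of random data through each NBD device, then read-compare it.
tmp=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 256 x 4 KiB = 1 MiB of test data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write pass
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                              # verify pass; exits non-zero on mismatch
done
rm "$tmp"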
00:04:53.223 16:03:51 -- event/event.sh@38 -- # waitforlisten 2872808 /var/tmp/spdk-nbd.sock 00:04:53.223 16:03:51 -- common/autotest_common.sh@819 -- # '[' -z 2872808 ']' 00:04:53.223 16:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.223 16:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.223 16:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.223 16:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.223 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.223 16:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.223 16:03:51 -- common/autotest_common.sh@852 -- # return 0 00:04:53.223 16:03:51 -- event/event.sh@39 -- # killprocess 2872808 00:04:53.223 16:03:51 -- common/autotest_common.sh@926 -- # '[' -z 2872808 ']' 00:04:53.223 16:03:51 -- common/autotest_common.sh@930 -- # kill -0 2872808 00:04:53.223 16:03:51 -- common/autotest_common.sh@931 -- # uname 00:04:53.223 16:03:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.223 16:03:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2872808 00:04:53.223 16:03:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.223 16:03:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.223 16:03:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2872808' 00:04:53.223 killing process with pid 2872808 00:04:53.223 16:03:51 -- common/autotest_common.sh@945 -- # kill 2872808 00:04:53.223 16:03:51 -- common/autotest_common.sh@950 -- # wait 2872808 00:04:53.483 spdk_app_start is called in Round 0. 00:04:53.483 Shutdown signal received, stop current app iteration 00:04:53.483 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:04:53.483 spdk_app_start is called in Round 1. 00:04:53.483 Shutdown signal received, stop current app iteration 00:04:53.483 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:04:53.483 spdk_app_start is called in Round 2. 00:04:53.483 Shutdown signal received, stop current app iteration 00:04:53.483 Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 reinitialization... 00:04:53.483 spdk_app_start is called in Round 3. 
00:04:53.483 Shutdown signal received, stop current app iteration 00:04:53.483 16:03:52 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:53.483 16:03:52 -- event/event.sh@42 -- # return 0 00:04:53.483 00:04:53.483 real 0m15.827s 00:04:53.483 user 0m33.015s 00:04:53.483 sys 0m2.277s 00:04:53.483 16:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.483 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:53.483 ************************************ 00:04:53.483 END TEST app_repeat 00:04:53.483 ************************************ 00:04:53.483 16:03:52 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:53.483 16:03:52 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:53.483 16:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.483 16:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.483 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:53.743 ************************************ 00:04:53.743 START TEST cpu_locks 00:04:53.743 ************************************ 00:04:53.743 16:03:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:53.743 * Looking for test storage... 00:04:53.743 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:53.743 16:03:52 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:53.743 16:03:52 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:53.743 16:03:52 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:53.743 16:03:52 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:53.743 16:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.744 16:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.744 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:53.744 ************************************ 00:04:53.744 START TEST default_locks 00:04:53.744 ************************************ 00:04:53.744 16:03:52 -- common/autotest_common.sh@1104 -- # default_locks 00:04:53.744 16:03:52 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2876165 00:04:53.744 16:03:52 -- event/cpu_locks.sh@47 -- # waitforlisten 2876165 00:04:53.744 16:03:52 -- common/autotest_common.sh@819 -- # '[' -z 2876165 ']' 00:04:53.744 16:03:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.744 16:03:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.744 16:03:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.744 16:03:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.744 16:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:53.744 16:03:52 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.744 [2024-04-23 16:03:52.573841] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:53.744 [2024-04-23 16:03:52.573976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876165 ] 00:04:53.744 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.004 [2024-04-23 16:03:52.692607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.004 [2024-04-23 16:03:52.790006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.004 [2024-04-23 16:03:52.790192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.577 16:03:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:54.577 16:03:53 -- common/autotest_common.sh@852 -- # return 0 00:04:54.577 16:03:53 -- event/cpu_locks.sh@49 -- # locks_exist 2876165 00:04:54.577 16:03:53 -- event/cpu_locks.sh@22 -- # lslocks -p 2876165 00:04:54.577 16:03:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.577 lslocks: write error 00:04:54.577 16:03:53 -- event/cpu_locks.sh@50 -- # killprocess 2876165 00:04:54.577 16:03:53 -- common/autotest_common.sh@926 -- # '[' -z 2876165 ']' 00:04:54.577 16:03:53 -- common/autotest_common.sh@930 -- # kill -0 2876165 00:04:54.577 16:03:53 -- common/autotest_common.sh@931 -- # uname 00:04:54.577 16:03:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:54.577 16:03:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876165 00:04:54.838 16:03:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:54.838 16:03:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:54.838 16:03:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876165' 00:04:54.838 killing process with pid 2876165 00:04:54.838 16:03:53 -- common/autotest_common.sh@945 -- # kill 2876165 00:04:54.838 16:03:53 -- common/autotest_common.sh@950 -- # wait 2876165 00:04:55.779 16:03:54 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2876165 00:04:55.779 16:03:54 -- common/autotest_common.sh@640 -- # local es=0 00:04:55.779 16:03:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2876165 00:04:55.779 16:03:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:55.779 16:03:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.779 16:03:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:55.779 16:03:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.779 16:03:54 -- common/autotest_common.sh@643 -- # waitforlisten 2876165 00:04:55.779 16:03:54 -- common/autotest_common.sh@819 -- # '[' -z 2876165 ']' 00:04:55.779 16:03:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.779 16:03:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.779 16:03:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
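The locks_exist helper used throughout these cpu_locks runs boils down to asking lslocks whether the target pid still holds a file lock whose path contains spdk_cpu_lock. A minimal approximation (the pid is the one from the run above):

# Succeeds if the given pid holds a lock on one of the /var/tmp/spdk_cpu_lock_* files.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 2876165 && echo "core lock held by 2876165"

The "lslocks: write error" lines in the trace are expected with this pattern: grep -q exits as soon as it matches, closing the pipe while lslocks is still writing.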
00:04:55.779 16:03:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.779 16:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2876165) - No such process 00:04:55.779 ERROR: process (pid: 2876165) is no longer running 00:04:55.779 16:03:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.779 16:03:54 -- common/autotest_common.sh@852 -- # return 1 00:04:55.779 16:03:54 -- common/autotest_common.sh@643 -- # es=1 00:04:55.779 16:03:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:55.779 16:03:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:55.779 16:03:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:55.779 16:03:54 -- event/cpu_locks.sh@54 -- # no_locks 00:04:55.779 16:03:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.779 16:03:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.779 16:03:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.779 00:04:55.779 real 0m1.886s 00:04:55.779 user 0m1.850s 00:04:55.779 sys 0m0.525s 00:04:55.779 16:03:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.779 16:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 ************************************ 00:04:55.779 END TEST default_locks 00:04:55.779 ************************************ 00:04:55.779 16:03:54 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:55.779 16:03:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.779 16:03:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.779 16:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 ************************************ 00:04:55.779 START TEST default_locks_via_rpc 00:04:55.779 ************************************ 00:04:55.779 16:03:54 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:55.779 16:03:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2876657 00:04:55.779 16:03:54 -- event/cpu_locks.sh@63 -- # waitforlisten 2876657 00:04:55.779 16:03:54 -- common/autotest_common.sh@819 -- # '[' -z 2876657 ']' 00:04:55.779 16:03:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.779 16:03:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:55.779 16:03:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.779 16:03:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:55.779 16:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.779 16:03:54 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.779 [2024-04-23 16:03:54.480140] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:04:55.779 [2024-04-23 16:03:54.480231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876657 ] 00:04:55.779 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.779 [2024-04-23 16:03:54.568438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.779 [2024-04-23 16:03:54.663051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.779 [2024-04-23 16:03:54.663234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.351 16:03:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.351 16:03:55 -- common/autotest_common.sh@852 -- # return 0 00:04:56.351 16:03:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:56.351 16:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.351 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.351 16:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.351 16:03:55 -- event/cpu_locks.sh@67 -- # no_locks 00:04:56.351 16:03:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:56.351 16:03:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:56.351 16:03:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:56.351 16:03:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:56.351 16:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:56.351 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:04:56.351 16:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:56.351 16:03:55 -- event/cpu_locks.sh@71 -- # locks_exist 2876657 00:04:56.351 16:03:55 -- event/cpu_locks.sh@22 -- # lslocks -p 2876657 00:04:56.351 16:03:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.611 16:03:55 -- event/cpu_locks.sh@73 -- # killprocess 2876657 00:04:56.611 16:03:55 -- common/autotest_common.sh@926 -- # '[' -z 2876657 ']' 00:04:56.611 16:03:55 -- common/autotest_common.sh@930 -- # kill -0 2876657 00:04:56.611 16:03:55 -- common/autotest_common.sh@931 -- # uname 00:04:56.611 16:03:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.611 16:03:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876657 00:04:56.611 16:03:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.611 16:03:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.611 16:03:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876657' 00:04:56.611 killing process with pid 2876657 00:04:56.611 16:03:55 -- common/autotest_common.sh@945 -- # kill 2876657 00:04:56.611 16:03:55 -- common/autotest_common.sh@950 -- # wait 2876657 00:04:57.553 00:04:57.553 real 0m1.814s 00:04:57.553 user 0m1.734s 00:04:57.553 sys 0m0.497s 00:04:57.553 16:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.553 16:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.553 ************************************ 00:04:57.553 END TEST default_locks_via_rpc 00:04:57.553 ************************************ 00:04:57.553 16:03:56 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:57.553 16:03:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.553 16:03:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.553 16:03:56 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.553 ************************************ 00:04:57.553 START TEST non_locking_app_on_locked_coremask 00:04:57.553 ************************************ 00:04:57.553 16:03:56 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:57.553 16:03:56 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2876991 00:04:57.553 16:03:56 -- event/cpu_locks.sh@81 -- # waitforlisten 2876991 /var/tmp/spdk.sock 00:04:57.554 16:03:56 -- common/autotest_common.sh@819 -- # '[' -z 2876991 ']' 00:04:57.554 16:03:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.554 16:03:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.554 16:03:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.554 16:03:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.554 16:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.554 16:03:56 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.554 [2024-04-23 16:03:56.333663] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:04:57.554 [2024-04-23 16:03:56.333793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876991 ] 00:04:57.554 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.554 [2024-04-23 16:03:56.450355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.815 [2024-04-23 16:03:56.546899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.815 [2024-04-23 16:03:56.547094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.387 16:03:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.387 16:03:57 -- common/autotest_common.sh@852 -- # return 0 00:04:58.387 16:03:57 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2877039 00:04:58.387 16:03:57 -- event/cpu_locks.sh@85 -- # waitforlisten 2877039 /var/tmp/spdk2.sock 00:04:58.387 16:03:57 -- common/autotest_common.sh@819 -- # '[' -z 2877039 ']' 00:04:58.387 16:03:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.387 16:03:57 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:58.387 16:03:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.387 16:03:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.387 16:03:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.387 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:04:58.387 [2024-04-23 16:03:57.098801] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
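The default_locks_via_rpc case that finishes above exercises the same lock files, but drops and re-takes them on a running target over its RPC socket instead of at start-up. Against a target launched as in the trace, the sequence reduces to roughly the following (rpc.py path and socket are as shown in the trace; the lslocks call is only an illustrative way to observe the effect):

# Toggle the per-core lock files on a live SPDK target via JSON-RPC.
RPC=./scripts/rpc.py
SOCK=/var/tmp/spdk.sock

$RPC -s "$SOCK" framework_disable_cpumask_locks        # releases /var/tmp/spdk_cpu_lock_*
lslocks | grep spdk_cpu_lock || echo "no core locks"   # nothing held while disabled
$RPC -s "$SOCK" framework_enable_cpumask_locks         # re-acquires the lock for core 0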
00:04:58.387 [2024-04-23 16:03:57.098917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877039 ] 00:04:58.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.387 [2024-04-23 16:03:57.251348] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.387 [2024-04-23 16:03:57.251390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.648 [2024-04-23 16:03:57.449700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.648 [2024-04-23 16:03:57.449907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.592 16:03:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.592 16:03:58 -- common/autotest_common.sh@852 -- # return 0 00:04:59.592 16:03:58 -- event/cpu_locks.sh@87 -- # locks_exist 2876991 00:04:59.592 16:03:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.592 16:03:58 -- event/cpu_locks.sh@22 -- # lslocks -p 2876991 00:04:59.852 lslocks: write error 00:04:59.852 16:03:58 -- event/cpu_locks.sh@89 -- # killprocess 2876991 00:04:59.852 16:03:58 -- common/autotest_common.sh@926 -- # '[' -z 2876991 ']' 00:04:59.852 16:03:58 -- common/autotest_common.sh@930 -- # kill -0 2876991 00:04:59.852 16:03:58 -- common/autotest_common.sh@931 -- # uname 00:04:59.852 16:03:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:59.852 16:03:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2876991 00:05:00.112 16:03:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:00.112 16:03:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:00.112 16:03:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2876991' 00:05:00.112 killing process with pid 2876991 00:05:00.112 16:03:58 -- common/autotest_common.sh@945 -- # kill 2876991 00:05:00.112 16:03:58 -- common/autotest_common.sh@950 -- # wait 2876991 00:05:02.026 16:04:00 -- event/cpu_locks.sh@90 -- # killprocess 2877039 00:05:02.026 16:04:00 -- common/autotest_common.sh@926 -- # '[' -z 2877039 ']' 00:05:02.026 16:04:00 -- common/autotest_common.sh@930 -- # kill -0 2877039 00:05:02.027 16:04:00 -- common/autotest_common.sh@931 -- # uname 00:05:02.027 16:04:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:02.027 16:04:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2877039 00:05:02.027 16:04:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:02.027 16:04:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:02.027 16:04:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2877039' 00:05:02.027 killing process with pid 2877039 00:05:02.027 16:04:00 -- common/autotest_common.sh@945 -- # kill 2877039 00:05:02.027 16:04:00 -- common/autotest_common.sh@950 -- # wait 2877039 00:05:02.599 00:05:02.599 real 0m5.126s 00:05:02.599 user 0m5.227s 00:05:02.599 sys 0m1.053s 00:05:02.599 16:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.599 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 ************************************ 00:05:02.599 END TEST non_locking_app_on_locked_coremask 00:05:02.599 ************************************ 00:05:02.599 16:04:01 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:02.599 16:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.599 16:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.599 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 ************************************ 00:05:02.599 START TEST locking_app_on_unlocked_coremask 00:05:02.599 ************************************ 00:05:02.599 16:04:01 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:02.599 16:04:01 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2877941 00:05:02.599 16:04:01 -- event/cpu_locks.sh@99 -- # waitforlisten 2877941 /var/tmp/spdk.sock 00:05:02.599 16:04:01 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:02.599 16:04:01 -- common/autotest_common.sh@819 -- # '[' -z 2877941 ']' 00:05:02.599 16:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.599 16:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:02.599 16:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.599 16:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:02.599 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.599 [2024-04-23 16:04:01.489205] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:02.599 [2024-04-23 16:04:01.489332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877941 ] 00:05:02.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.858 [2024-04-23 16:04:01.608982] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:02.858 [2024-04-23 16:04:01.609020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.858 [2024-04-23 16:04:01.707114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.858 [2024-04-23 16:04:01.707297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.428 16:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:03.428 16:04:02 -- common/autotest_common.sh@852 -- # return 0 00:05:03.428 16:04:02 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2878236 00:05:03.428 16:04:02 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.428 16:04:02 -- event/cpu_locks.sh@103 -- # waitforlisten 2878236 /var/tmp/spdk2.sock 00:05:03.428 16:04:02 -- common/autotest_common.sh@819 -- # '[' -z 2878236 ']' 00:05:03.428 16:04:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.428 16:04:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:03.428 16:04:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
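The locking_app_on_unlocked_coremask case being set up here shows the flip side: a first target started with --disable-cpumask-locks never takes the core-lock files, so a second target on the same mask can start and claim them. Stripped of the harness (binary path and sockets as in the trace; the sleep is a crude stand-in for the waitforlisten helper):

# First target opts out of core locks; a second target on the same mask starts fine.
TGT=./build/bin/spdk_tgt

$TGT -m 0x1 --disable-cpumask-locks &    # logs "CPU core locks deactivated."
first=$!
sleep 1                                  # stand-in for waitforlisten
$TGT -m 0x1 -r /var/tmp/spdk2.sock &     # claims the core 0 lock unopposed
second=$!

kill "$second" "$first"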
00:05:03.428 16:04:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:03.429 16:04:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.429 [2024-04-23 16:04:02.253193] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:03.429 [2024-04-23 16:04:02.253307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878236 ] 00:05:03.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.691 [2024-04-23 16:04:02.404433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.691 [2024-04-23 16:04:02.596922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:03.691 [2024-04-23 16:04:02.597114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.079 16:04:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.079 16:04:03 -- common/autotest_common.sh@852 -- # return 0 00:05:05.079 16:04:03 -- event/cpu_locks.sh@105 -- # locks_exist 2878236 00:05:05.079 16:04:03 -- event/cpu_locks.sh@22 -- # lslocks -p 2878236 00:05:05.079 16:04:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.079 lslocks: write error 00:05:05.079 16:04:03 -- event/cpu_locks.sh@107 -- # killprocess 2877941 00:05:05.079 16:04:03 -- common/autotest_common.sh@926 -- # '[' -z 2877941 ']' 00:05:05.079 16:04:03 -- common/autotest_common.sh@930 -- # kill -0 2877941 00:05:05.079 16:04:03 -- common/autotest_common.sh@931 -- # uname 00:05:05.079 16:04:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:05.079 16:04:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2877941 00:05:05.079 16:04:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:05.079 16:04:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:05.079 16:04:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2877941' 00:05:05.079 killing process with pid 2877941 00:05:05.079 16:04:03 -- common/autotest_common.sh@945 -- # kill 2877941 00:05:05.079 16:04:03 -- common/autotest_common.sh@950 -- # wait 2877941 00:05:06.996 16:04:05 -- event/cpu_locks.sh@108 -- # killprocess 2878236 00:05:06.996 16:04:05 -- common/autotest_common.sh@926 -- # '[' -z 2878236 ']' 00:05:06.996 16:04:05 -- common/autotest_common.sh@930 -- # kill -0 2878236 00:05:06.996 16:04:05 -- common/autotest_common.sh@931 -- # uname 00:05:06.996 16:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.996 16:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2878236 00:05:06.996 16:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.996 16:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.996 16:04:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2878236' 00:05:06.996 killing process with pid 2878236 00:05:06.996 16:04:05 -- common/autotest_common.sh@945 -- # kill 2878236 00:05:06.996 16:04:05 -- common/autotest_common.sh@950 -- # wait 2878236 00:05:07.567 00:05:07.568 real 0m5.073s 00:05:07.568 user 0m5.215s 00:05:07.568 sys 0m1.024s 00:05:07.568 16:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.568 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.568 ************************************ 00:05:07.568 END TEST locking_app_on_unlocked_coremask 
00:05:07.568 ************************************ 00:05:07.829 16:04:06 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.829 16:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.829 16:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.829 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.829 ************************************ 00:05:07.829 START TEST locking_app_on_locked_coremask 00:05:07.829 ************************************ 00:05:07.829 16:04:06 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:07.829 16:04:06 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2879014 00:05:07.829 16:04:06 -- event/cpu_locks.sh@116 -- # waitforlisten 2879014 /var/tmp/spdk.sock 00:05:07.829 16:04:06 -- common/autotest_common.sh@819 -- # '[' -z 2879014 ']' 00:05:07.829 16:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.829 16:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:07.829 16:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.829 16:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:07.829 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.829 16:04:06 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.829 [2024-04-23 16:04:06.626592] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:07.829 [2024-04-23 16:04:06.626762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879014 ] 00:05:07.829 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.092 [2024-04-23 16:04:06.765064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.092 [2024-04-23 16:04:06.861978] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.092 [2024-04-23 16:04:06.862204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.664 16:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:08.664 16:04:07 -- common/autotest_common.sh@852 -- # return 0 00:05:08.664 16:04:07 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2879182 00:05:08.664 16:04:07 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2879182 /var/tmp/spdk2.sock 00:05:08.664 16:04:07 -- common/autotest_common.sh@640 -- # local es=0 00:05:08.664 16:04:07 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2879182 /var/tmp/spdk2.sock 00:05:08.664 16:04:07 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.664 16:04:07 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:08.664 16:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.664 16:04:07 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:08.664 16:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:08.664 16:04:07 -- common/autotest_common.sh@643 -- # waitforlisten 2879182 /var/tmp/spdk2.sock 00:05:08.664 16:04:07 -- common/autotest_common.sh@819 -- # '[' 
-z 2879182 ']' 00:05:08.664 16:04:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.664 16:04:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.664 16:04:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.664 16:04:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.664 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.664 [2024-04-23 16:04:07.421095] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:08.664 [2024-04-23 16:04:07.421244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879182 ] 00:05:08.664 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.664 [2024-04-23 16:04:07.585760] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2879014 has claimed it. 00:05:08.664 [2024-04-23 16:04:07.585813] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:09.234 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2879182) - No such process 00:05:09.234 ERROR: process (pid: 2879182) is no longer running 00:05:09.234 16:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.234 16:04:07 -- common/autotest_common.sh@852 -- # return 1 00:05:09.234 16:04:07 -- common/autotest_common.sh@643 -- # es=1 00:05:09.234 16:04:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:09.234 16:04:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:09.234 16:04:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:09.234 16:04:07 -- event/cpu_locks.sh@122 -- # locks_exist 2879014 00:05:09.234 16:04:07 -- event/cpu_locks.sh@22 -- # lslocks -p 2879014 00:05:09.234 16:04:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.234 lslocks: write error 00:05:09.234 16:04:08 -- event/cpu_locks.sh@124 -- # killprocess 2879014 00:05:09.234 16:04:08 -- common/autotest_common.sh@926 -- # '[' -z 2879014 ']' 00:05:09.234 16:04:08 -- common/autotest_common.sh@930 -- # kill -0 2879014 00:05:09.234 16:04:08 -- common/autotest_common.sh@931 -- # uname 00:05:09.234 16:04:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.234 16:04:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2879014 00:05:09.495 16:04:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.495 16:04:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.495 16:04:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2879014' 00:05:09.495 killing process with pid 2879014 00:05:09.495 16:04:08 -- common/autotest_common.sh@945 -- # kill 2879014 00:05:09.495 16:04:08 -- common/autotest_common.sh@950 -- # wait 2879014 00:05:10.439 00:05:10.439 real 0m2.502s 00:05:10.439 user 0m2.562s 00:05:10.439 sys 0m0.723s 00:05:10.439 16:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.439 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.439 ************************************ 00:05:10.439 END TEST locking_app_on_locked_coremask 00:05:10.439 ************************************ 00:05:10.439 16:04:09 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:10.439 16:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.439 16:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.439 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.439 ************************************ 00:05:10.439 START TEST locking_overlapped_coremask 00:05:10.439 ************************************ 00:05:10.439 16:04:09 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:10.439 16:04:09 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2879513 00:05:10.439 16:04:09 -- event/cpu_locks.sh@133 -- # waitforlisten 2879513 /var/tmp/spdk.sock 00:05:10.439 16:04:09 -- common/autotest_common.sh@819 -- # '[' -z 2879513 ']' 00:05:10.439 16:04:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.439 16:04:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.439 16:04:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.439 16:04:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.439 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:10.439 16:04:09 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:10.439 [2024-04-23 16:04:09.140673] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:10.439 [2024-04-23 16:04:09.140787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879513 ] 00:05:10.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.439 [2024-04-23 16:04:09.240338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.439 [2024-04-23 16:04:09.341296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.440 [2024-04-23 16:04:09.341519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.440 [2024-04-23 16:04:09.341624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.440 [2024-04-23 16:04:09.341640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.009 16:04:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.009 16:04:09 -- common/autotest_common.sh@852 -- # return 0 00:05:11.009 16:04:09 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2879808 00:05:11.009 16:04:09 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2879808 /var/tmp/spdk2.sock 00:05:11.009 16:04:09 -- common/autotest_common.sh@640 -- # local es=0 00:05:11.009 16:04:09 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2879808 /var/tmp/spdk2.sock 00:05:11.009 16:04:09 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:11.009 16:04:09 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.009 16:04:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:11.009 16:04:09 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:11.009 16:04:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:11.009 16:04:09 -- 
common/autotest_common.sh@643 -- # waitforlisten 2879808 /var/tmp/spdk2.sock 00:05:11.009 16:04:09 -- common/autotest_common.sh@819 -- # '[' -z 2879808 ']' 00:05:11.009 16:04:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.009 16:04:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.009 16:04:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.009 16:04:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.009 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:11.009 [2024-04-23 16:04:09.917357] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:11.009 [2024-04-23 16:04:09.917450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879808 ] 00:05:11.269 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.269 [2024-04-23 16:04:10.053690] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2879513 has claimed it. 00:05:11.269 [2024-04-23 16:04:10.053740] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.843 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2879808) - No such process 00:05:11.843 ERROR: process (pid: 2879808) is no longer running 00:05:11.843 16:04:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.843 16:04:10 -- common/autotest_common.sh@852 -- # return 1 00:05:11.843 16:04:10 -- common/autotest_common.sh@643 -- # es=1 00:05:11.843 16:04:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:11.843 16:04:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:11.843 16:04:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:11.843 16:04:10 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:11.843 16:04:10 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.843 16:04:10 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.843 16:04:10 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.843 16:04:10 -- event/cpu_locks.sh@141 -- # killprocess 2879513 00:05:11.843 16:04:10 -- common/autotest_common.sh@926 -- # '[' -z 2879513 ']' 00:05:11.843 16:04:10 -- common/autotest_common.sh@930 -- # kill -0 2879513 00:05:11.843 16:04:10 -- common/autotest_common.sh@931 -- # uname 00:05:11.843 16:04:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.843 16:04:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2879513 00:05:11.843 16:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.843 16:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.843 16:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2879513' 00:05:11.843 killing process with pid 2879513 00:05:11.843 16:04:10 -- common/autotest_common.sh@945 -- # kill 2879513 00:05:11.843 16:04:10 -- 
common/autotest_common.sh@950 -- # wait 2879513 00:05:12.787 00:05:12.787 real 0m2.308s 00:05:12.787 user 0m5.999s 00:05:12.787 sys 0m0.544s 00:05:12.787 16:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.787 16:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.787 ************************************ 00:05:12.787 END TEST locking_overlapped_coremask 00:05:12.787 ************************************ 00:05:12.787 16:04:11 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:12.787 16:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.787 16:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.787 16:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.787 ************************************ 00:05:12.787 START TEST locking_overlapped_coremask_via_rpc 00:05:12.788 ************************************ 00:05:12.788 16:04:11 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:12.788 16:04:11 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2880139 00:05:12.788 16:04:11 -- event/cpu_locks.sh@149 -- # waitforlisten 2880139 /var/tmp/spdk.sock 00:05:12.788 16:04:11 -- common/autotest_common.sh@819 -- # '[' -z 2880139 ']' 00:05:12.788 16:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.788 16:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.788 16:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.788 16:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.788 16:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:12.788 16:04:11 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:12.788 [2024-04-23 16:04:11.508137] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:12.788 [2024-04-23 16:04:11.508285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880139 ] 00:05:12.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.788 [2024-04-23 16:04:11.642611] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:12.788 [2024-04-23 16:04:11.642662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.049 [2024-04-23 16:04:11.736268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.049 [2024-04-23 16:04:11.736535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.049 [2024-04-23 16:04:11.736625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.049 [2024-04-23 16:04:11.736638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.310 16:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.310 16:04:12 -- common/autotest_common.sh@852 -- # return 0 00:05:13.310 16:04:12 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2880157 00:05:13.310 16:04:12 -- event/cpu_locks.sh@153 -- # waitforlisten 2880157 /var/tmp/spdk2.sock 00:05:13.310 16:04:12 -- common/autotest_common.sh@819 -- # '[' -z 2880157 ']' 00:05:13.310 16:04:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.310 16:04:12 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:13.310 16:04:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.310 16:04:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.310 16:04:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.310 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 [2024-04-23 16:04:12.306882] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:13.571 [2024-04-23 16:04:12.307022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880157 ] 00:05:13.571 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.571 [2024-04-23 16:04:12.481021] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.571 [2024-04-23 16:04:12.481067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.830 [2024-04-23 16:04:12.679754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.830 [2024-04-23 16:04:12.680018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.830 [2024-04-23 16:04:12.680075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.830 [2024-04-23 16:04:12.680109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:14.852 16:04:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.852 16:04:13 -- common/autotest_common.sh@852 -- # return 0 00:05:14.852 16:04:13 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.852 16:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.852 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.852 16:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.852 16:04:13 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.852 16:04:13 -- common/autotest_common.sh@640 -- # local es=0 00:05:14.852 16:04:13 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.852 16:04:13 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:14.852 16:04:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:14.852 16:04:13 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:14.852 16:04:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:14.852 16:04:13 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.852 16:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.852 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.852 [2024-04-23 16:04:13.691736] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2880139 has claimed it. 00:05:14.852 request: 00:05:14.852 { 00:05:14.852 "method": "framework_enable_cpumask_locks", 00:05:14.852 "req_id": 1 00:05:14.852 } 00:05:14.852 Got JSON-RPC error response 00:05:14.852 response: 00:05:14.852 { 00:05:14.852 "code": -32603, 00:05:14.852 "message": "Failed to claim CPU core: 2" 00:05:14.852 } 00:05:14.852 16:04:13 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:14.852 16:04:13 -- common/autotest_common.sh@643 -- # es=1 00:05:14.852 16:04:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:14.852 16:04:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:14.852 16:04:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:14.852 16:04:13 -- event/cpu_locks.sh@158 -- # waitforlisten 2880139 /var/tmp/spdk.sock 00:05:14.852 16:04:13 -- common/autotest_common.sh@819 -- # '[' -z 2880139 ']' 00:05:14.852 16:04:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.852 16:04:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.852 16:04:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
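The JSON-RPC exchange above is the heart of locking_overlapped_coremask_via_rpc: both targets start with locks disabled, the first (mask 0x7) enables them and claims cores 0-2, and the second (mask 0x1c) is then refused because core 2 is already locked, mirroring the start-up failures seen in the earlier cases. The failing call, issued by hand against the second target's socket (path as in the trace), would look like:

# Ask the second target to take its core locks; expect the -32603 error shown above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# => JSON-RPC error "Failed to claim CPU core: 2", since pid 2880139 already holds it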
00:05:14.852 16:04:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:14.853 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 16:04:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.113 16:04:13 -- common/autotest_common.sh@852 -- # return 0 00:05:15.113 16:04:13 -- event/cpu_locks.sh@159 -- # waitforlisten 2880157 /var/tmp/spdk2.sock 00:05:15.113 16:04:13 -- common/autotest_common.sh@819 -- # '[' -z 2880157 ']' 00:05:15.113 16:04:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.113 16:04:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.113 16:04:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.113 16:04:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.113 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 16:04:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:15.113 16:04:14 -- common/autotest_common.sh@852 -- # return 0 00:05:15.113 16:04:14 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:15.113 16:04:14 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:15.113 16:04:14 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:15.113 16:04:14 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:15.113 00:05:15.113 real 0m2.616s 00:05:15.113 user 0m0.832s 00:05:15.113 sys 0m0.202s 00:05:15.113 16:04:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.113 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 ************************************ 00:05:15.113 END TEST locking_overlapped_coremask_via_rpc 00:05:15.113 ************************************ 00:05:15.374 16:04:14 -- event/cpu_locks.sh@174 -- # cleanup 00:05:15.374 16:04:14 -- event/cpu_locks.sh@15 -- # [[ -z 2880139 ]] 00:05:15.374 16:04:14 -- event/cpu_locks.sh@15 -- # killprocess 2880139 00:05:15.374 16:04:14 -- common/autotest_common.sh@926 -- # '[' -z 2880139 ']' 00:05:15.374 16:04:14 -- common/autotest_common.sh@930 -- # kill -0 2880139 00:05:15.374 16:04:14 -- common/autotest_common.sh@931 -- # uname 00:05:15.374 16:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.374 16:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2880139 00:05:15.374 16:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.374 16:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.374 16:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2880139' 00:05:15.375 killing process with pid 2880139 00:05:15.375 16:04:14 -- common/autotest_common.sh@945 -- # kill 2880139 00:05:15.375 16:04:14 -- common/autotest_common.sh@950 -- # wait 2880139 00:05:16.319 16:04:14 -- event/cpu_locks.sh@16 -- # [[ -z 2880157 ]] 00:05:16.319 16:04:14 -- event/cpu_locks.sh@16 -- # killprocess 2880157 00:05:16.319 16:04:14 -- common/autotest_common.sh@926 -- # '[' -z 2880157 ']' 00:05:16.319 16:04:14 -- common/autotest_common.sh@930 -- # kill -0 2880157 00:05:16.319 16:04:14 -- common/autotest_common.sh@931 -- # uname 
00:05:16.319 16:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.319 16:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2880157 00:05:16.319 16:04:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:16.319 16:04:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:16.319 16:04:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2880157' 00:05:16.319 killing process with pid 2880157 00:05:16.319 16:04:15 -- common/autotest_common.sh@945 -- # kill 2880157 00:05:16.319 16:04:15 -- common/autotest_common.sh@950 -- # wait 2880157 00:05:17.261 16:04:15 -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.261 16:04:15 -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.261 16:04:15 -- event/cpu_locks.sh@15 -- # [[ -z 2880139 ]] 00:05:17.261 16:04:15 -- event/cpu_locks.sh@15 -- # killprocess 2880139 00:05:17.261 16:04:15 -- common/autotest_common.sh@926 -- # '[' -z 2880139 ']' 00:05:17.261 16:04:15 -- common/autotest_common.sh@930 -- # kill -0 2880139 00:05:17.261 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2880139) - No such process 00:05:17.261 16:04:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2880139 is not found' 00:05:17.261 Process with pid 2880139 is not found 00:05:17.261 16:04:15 -- event/cpu_locks.sh@16 -- # [[ -z 2880157 ]] 00:05:17.261 16:04:15 -- event/cpu_locks.sh@16 -- # killprocess 2880157 00:05:17.261 16:04:15 -- common/autotest_common.sh@926 -- # '[' -z 2880157 ']' 00:05:17.261 16:04:15 -- common/autotest_common.sh@930 -- # kill -0 2880157 00:05:17.261 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2880157) - No such process 00:05:17.261 16:04:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2880157 is not found' 00:05:17.261 Process with pid 2880157 is not found 00:05:17.261 16:04:15 -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.261 00:05:17.261 real 0m23.449s 00:05:17.261 user 0m39.701s 00:05:17.261 sys 0m5.553s 00:05:17.261 16:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.261 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:17.261 ************************************ 00:05:17.261 END TEST cpu_locks 00:05:17.261 ************************************ 00:05:17.261 00:05:17.261 real 0m47.148s 00:05:17.261 user 1m24.992s 00:05:17.261 sys 0m8.964s 00:05:17.261 16:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.261 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:17.261 ************************************ 00:05:17.261 END TEST event 00:05:17.261 ************************************ 00:05:17.261 16:04:15 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:17.261 16:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.261 16:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.261 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:17.261 ************************************ 00:05:17.261 START TEST thread 00:05:17.261 ************************************ 00:05:17.261 16:04:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:17.261 * Looking for test storage... 
00:05:17.261 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:05:17.261 16:04:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.261 16:04:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:17.261 16:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.261 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:17.261 ************************************ 00:05:17.261 START TEST thread_poller_perf 00:05:17.261 ************************************ 00:05:17.261 16:04:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.261 [2024-04-23 16:04:16.040673] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:17.261 [2024-04-23 16:04:16.040823] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881147 ] 00:05:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.261 [2024-04-23 16:04:16.173789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.523 [2024-04-23 16:04:16.269928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.523 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:18.909 ====================================== 00:05:18.909 busy:1908037620 (cyc) 00:05:18.909 total_run_count: 383000 00:05:18.909 tsc_hz: 1900000000 (cyc) 00:05:18.909 ====================================== 00:05:18.909 poller_cost: 4981 (cyc), 2621 (nsec) 00:05:18.909 00:05:18.909 real 0m1.443s 00:05:18.909 user 0m1.274s 00:05:18.909 sys 0m0.158s 00:05:18.909 16:04:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.909 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.909 ************************************ 00:05:18.909 END TEST thread_poller_perf 00:05:18.909 ************************************ 00:05:18.909 16:04:17 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.909 16:04:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:18.909 16:04:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.909 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.909 ************************************ 00:05:18.909 START TEST thread_poller_perf 00:05:18.909 ************************************ 00:05:18.909 16:04:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.909 [2024-04-23 16:04:17.522203] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:05:18.909 [2024-04-23 16:04:17.522341] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881468 ] 00:05:18.909 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.909 [2024-04-23 16:04:17.653293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.909 [2024-04-23 16:04:17.748706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.909 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.292 ====================================== 00:05:20.292 busy:1902287178 (cyc) 00:05:20.292 total_run_count: 5270000 00:05:20.292 tsc_hz: 1900000000 (cyc) 00:05:20.292 ====================================== 00:05:20.292 poller_cost: 360 (cyc), 189 (nsec) 00:05:20.293 00:05:20.293 real 0m1.426s 00:05:20.293 user 0m1.266s 00:05:20.293 sys 0m0.151s 00:05:20.293 16:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.293 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 ************************************ 00:05:20.293 END TEST thread_poller_perf 00:05:20.293 ************************************ 00:05:20.293 16:04:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.293 00:05:20.293 real 0m3.005s 00:05:20.293 user 0m2.585s 00:05:20.293 sys 0m0.420s 00:05:20.293 16:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.293 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 ************************************ 00:05:20.293 END TEST thread 00:05:20.293 ************************************ 00:05:20.293 16:04:18 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:20.293 16:04:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.293 16:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.293 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 ************************************ 00:05:20.293 START TEST accel 00:05:20.293 ************************************ 00:05:20.293 16:04:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:20.293 * Looking for test storage... 00:05:20.293 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:05:20.293 16:04:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:20.293 16:04:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:20.293 16:04:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.293 16:04:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=2881818 00:05:20.293 16:04:19 -- accel/accel.sh@60 -- # waitforlisten 2881818 00:05:20.293 16:04:19 -- common/autotest_common.sh@819 -- # '[' -z 2881818 ']' 00:05:20.293 16:04:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.293 16:04:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.293 16:04:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
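Note: the poller_cost figures in the two summaries above follow directly from the printed counters: busy cycles divided by total_run_count gives cycles per poller call, and tsc_hz converts that to nanoseconds. A minimal shell sketch using the first run's numbers (variable names here are illustrative, not poller_perf internals):

    busy_cyc=1908037620; runs=383000; tsc_hz=1900000000
    echo $(( busy_cyc / runs ))                          # 4981 cycles per poll
    echo $(( busy_cyc / runs * 1000000000 / tsc_hz ))    # 2621 nanoseconds at 1.9 GHz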
00:05:20.293 16:04:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.293 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.293 16:04:19 -- accel/accel.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:20.293 16:04:19 -- accel/accel.sh@58 -- # build_accel_config 00:05:20.293 16:04:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.293 16:04:19 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:20.293 16:04:19 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:20.293 16:04:19 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:20.293 16:04:19 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:20.293 16:04:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.293 16:04:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.293 16:04:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.293 16:04:19 -- accel/accel.sh@42 -- # jq -r . 00:05:20.293 [2024-04-23 16:04:19.144470] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:20.293 [2024-04-23 16:04:19.144618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881818 ] 00:05:20.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.554 [2024-04-23 16:04:19.274881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.554 [2024-04-23 16:04:19.365768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.554 [2024-04-23 16:04:19.365979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.554 [2024-04-23 16:04:19.370541] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:20.554 [2024-04-23 16:04:19.378488] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:28.696 16:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.696 16:04:26 -- common/autotest_common.sh@852 -- # return 0 00:05:28.696 16:04:26 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:28.696 16:04:26 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:28.696 16:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.696 16:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.696 16:04:26 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:28.697 16:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=iaa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- 
accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:28.697 16:04:26 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # IFS== 00:05:28.697 16:04:26 -- accel/accel.sh@64 -- # read -r opc module 00:05:28.697 16:04:26 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dsa 00:05:28.697 16:04:26 -- accel/accel.sh@67 -- # killprocess 2881818 00:05:28.697 16:04:26 -- common/autotest_common.sh@926 -- # '[' -z 2881818 ']' 00:05:28.697 16:04:26 -- common/autotest_common.sh@930 -- # kill -0 2881818 00:05:28.697 16:04:26 -- common/autotest_common.sh@931 -- # uname 00:05:28.697 16:04:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.697 16:04:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2881818 00:05:28.697 16:04:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.697 16:04:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.697 16:04:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2881818' 00:05:28.697 killing process with pid 2881818 00:05:28.697 16:04:27 -- common/autotest_common.sh@945 -- # kill 2881818 00:05:28.697 16:04:27 -- common/autotest_common.sh@950 -- # wait 2881818 00:05:31.247 16:04:29 -- accel/accel.sh@68 -- # trap - ERR 00:05:31.247 16:04:29 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:31.247 16:04:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:31.247 16:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.247 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 16:04:29 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:31.247 16:04:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:31.247 16:04:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.247 16:04:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.247 16:04:29 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:31.247 16:04:29 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:31.247 16:04:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.247 16:04:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.247 16:04:29 -- accel/accel.sh@42 -- # jq -r . 
00:05:31.247 16:04:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.247 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 16:04:29 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:31.247 16:04:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:31.247 16:04:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.247 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 ************************************ 00:05:31.247 START TEST accel_missing_filename 00:05:31.247 ************************************ 00:05:31.247 16:04:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:31.247 16:04:29 -- common/autotest_common.sh@640 -- # local es=0 00:05:31.247 16:04:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:31.247 16:04:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:31.247 16:04:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:31.247 16:04:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:31.247 16:04:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:31.247 16:04:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:31.247 16:04:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:31.247 16:04:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.247 16:04:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.247 16:04:29 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:31.247 16:04:29 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:31.247 16:04:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.247 16:04:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.247 16:04:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.247 16:04:29 -- accel/accel.sh@42 -- # jq -r . 00:05:31.247 [2024-04-23 16:04:29.970947] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:05:31.247 [2024-04-23 16:04:29.971077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883977 ] 00:05:31.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.247 [2024-04-23 16:04:30.091404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.508 [2024-04-23 16:04:30.193031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.508 [2024-04-23 16:04:30.197603] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:31.508 [2024-04-23 16:04:30.205567] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:38.098 [2024-04-23 16:04:36.616618] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.016 [2024-04-23 16:04:38.492398] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:40.016 A filename is required. 
00:05:40.016 16:04:38 -- common/autotest_common.sh@643 -- # es=234 00:05:40.016 16:04:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:40.016 16:04:38 -- common/autotest_common.sh@652 -- # es=106 00:05:40.016 16:04:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:40.016 16:04:38 -- common/autotest_common.sh@660 -- # es=1 00:05:40.016 16:04:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:40.016 00:05:40.016 real 0m8.739s 00:05:40.016 user 0m2.318s 00:05:40.016 sys 0m0.246s 00:05:40.016 16:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.016 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.016 ************************************ 00:05:40.016 END TEST accel_missing_filename 00:05:40.016 ************************************ 00:05:40.016 16:04:38 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:40.016 16:04:38 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:40.016 16:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.016 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:05:40.016 ************************************ 00:05:40.016 START TEST accel_compress_verify 00:05:40.016 ************************************ 00:05:40.016 16:04:38 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:40.016 16:04:38 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.016 16:04:38 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:40.016 16:04:38 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:40.016 16:04:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.016 16:04:38 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:40.016 16:04:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.016 16:04:38 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:40.016 16:04:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:40.016 16:04:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.016 16:04:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.016 16:04:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:40.016 16:04:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:40.016 16:04:38 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:40.016 16:04:38 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:40.016 16:04:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.016 16:04:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.016 16:04:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.016 16:04:38 -- accel/accel.sh@42 -- # jq -r . 00:05:40.016 [2024-04-23 16:04:38.740981] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:05:40.016 [2024-04-23 16:04:38.741101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885785 ] 00:05:40.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.016 [2024-04-23 16:04:38.854929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.278 [2024-04-23 16:04:38.950866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.278 [2024-04-23 16:04:38.955408] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:40.278 [2024-04-23 16:04:38.963375] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:46.872 [2024-04-23 16:04:45.357495] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.786 [2024-04-23 16:04:47.211277] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:48.786 00:05:48.786 Compression does not support the verify option, aborting. 00:05:48.786 16:04:47 -- common/autotest_common.sh@643 -- # es=161 00:05:48.786 16:04:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.786 16:04:47 -- common/autotest_common.sh@652 -- # es=33 00:05:48.786 16:04:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:48.786 16:04:47 -- common/autotest_common.sh@660 -- # es=1 00:05:48.786 16:04:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.786 00:05:48.786 real 0m8.702s 00:05:48.786 user 0m2.311s 00:05:48.786 sys 0m0.238s 00:05:48.786 16:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 END TEST accel_compress_verify 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:48.786 16:04:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:48.786 16:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 START TEST accel_wrong_workload 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:48.786 16:04:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:48.786 16:04:47 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:48.786 16:04:47 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.786 16:04:47 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.786 16:04:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:48.786 16:04:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 
00:05:48.786 16:04:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:48.786 16:04:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.786 16:04:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.786 16:04:47 -- accel/accel.sh@42 -- # jq -r . 00:05:48.786 Unsupported workload type: foobar 00:05:48.786 [2024-04-23 16:04:47.487390] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:48.786 accel_perf options: 00:05:48.786 [-h help message] 00:05:48.786 [-q queue depth per core] 00:05:48.786 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.786 [-T number of threads per core 00:05:48.786 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:48.786 [-t time in seconds] 00:05:48.786 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.786 [ dif_verify, , dif_generate, dif_generate_copy 00:05:48.786 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.786 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.786 [-S for crc32c workload, use this seed value (default 0) 00:05:48.786 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.786 [-f for fill workload, use this BYTE value (default 255) 00:05:48.786 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.786 [-y verify result if this switch is on] 00:05:48.786 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.786 Can be used to spread operations across a wider range of memory. 
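Note: the usage text above corresponds to the accel_perf invocations this suite issues elsewhere in this log; a representative form is shown below (a sketch: -c /dev/fd/62 is how the test script feeds the JSON accel module config, e.g. dsa_scan_accel_module / iaa_scan_accel_module, via a file descriptor):

    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y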
00:05:48.786 16:04:47 -- common/autotest_common.sh@643 -- # es=1 00:05:48.786 16:04:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.786 16:04:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:48.786 16:04:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.786 00:05:48.786 real 0m0.064s 00:05:48.786 user 0m0.056s 00:05:48.786 sys 0m0.039s 00:05:48.786 16:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 END TEST accel_wrong_workload 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.786 16:04:47 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:48.786 16:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 START TEST accel_negative_buffers 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:48.786 16:04:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:48.786 16:04:47 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:48.786 16:04:47 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:48.786 16:04:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.786 16:04:47 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.786 16:04:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:48.786 16:04:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:48.786 16:04:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.786 16:04:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.786 16:04:47 -- accel/accel.sh@42 -- # jq -r . 00:05:48.786 -x option must be non-negative. 00:05:48.786 [2024-04-23 16:04:47.589799] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:48.786 accel_perf options: 00:05:48.786 [-h help message] 00:05:48.786 [-q queue depth per core] 00:05:48.786 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:48.786 [-T number of threads per core 00:05:48.786 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:48.786 [-t time in seconds] 00:05:48.786 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:48.786 [ dif_verify, , dif_generate, dif_generate_copy 00:05:48.786 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:48.786 [-l for compress/decompress workloads, name of uncompressed input file 00:05:48.786 [-S for crc32c workload, use this seed value (default 0) 00:05:48.786 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:48.786 [-f for fill workload, use this BYTE value (default 255) 00:05:48.786 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:48.786 [-y verify result if this switch is on] 00:05:48.786 [-a tasks to allocate per core (default: same value as -q)] 00:05:48.786 Can be used to spread operations across a wider range of memory. 00:05:48.786 16:04:47 -- common/autotest_common.sh@643 -- # es=1 00:05:48.786 16:04:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.786 16:04:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:48.786 16:04:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.786 00:05:48.786 real 0m0.060s 00:05:48.786 user 0m0.053s 00:05:48.786 sys 0m0.039s 00:05:48.786 16:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 END TEST accel_negative_buffers 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:48.786 16:04:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:48.786 16:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.786 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 START TEST accel_crc32c 00:05:48.786 ************************************ 00:05:48.786 16:04:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:48.786 16:04:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.786 16:04:47 -- accel/accel.sh@17 -- # local accel_module 00:05:48.786 16:04:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:48.786 16:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.786 16:04:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:48.786 16:04:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:48.786 16:04:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:48.787 16:04:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:48.787 16:04:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.787 16:04:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.787 16:04:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.787 16:04:47 -- accel/accel.sh@42 -- # jq -r . 00:05:48.787 [2024-04-23 16:04:47.687248] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:05:48.787 [2024-04-23 16:04:47.687367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887632 ] 00:05:49.047 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.047 [2024-04-23 16:04:47.817390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.047 [2024-04-23 16:04:47.912869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.047 [2024-04-23 16:04:47.917478] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:49.047 [2024-04-23 16:04:47.925427] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:59.051 16:04:57 -- accel/accel.sh@18 -- # out=' 00:05:59.051 SPDK Configuration: 00:05:59.051 Core mask: 0x1 00:05:59.051 00:05:59.051 Accel Perf Configuration: 00:05:59.051 Workload Type: crc32c 00:05:59.051 CRC-32C seed: 32 00:05:59.051 Transfer size: 4096 bytes 00:05:59.051 Vector count 1 00:05:59.051 Module: dsa 00:05:59.051 Queue depth: 32 00:05:59.051 Allocate depth: 32 00:05:59.051 # threads/core: 1 00:05:59.051 Run time: 1 seconds 00:05:59.051 Verify: Yes 00:05:59.051 00:05:59.051 Running for 1 seconds... 00:05:59.051 00:05:59.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.051 ------------------------------------------------------------------------------------ 00:05:59.051 0,0 343264/s 1340 MiB/s 0 0 00:05:59.051 ==================================================================================== 00:05:59.051 Total 343264/s 1340 MiB/s 0 0' 00:05:59.051 16:04:57 -- accel/accel.sh@20 -- # IFS=: 00:05:59.051 16:04:57 -- accel/accel.sh@20 -- # read -r var val 00:05:59.051 16:04:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:59.051 16:04:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:59.051 16:04:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.052 16:04:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.052 16:04:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:59.052 16:04:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:59.052 16:04:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:05:59.052 16:04:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:59.052 16:04:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.052 16:04:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.052 16:04:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.052 16:04:57 -- accel/accel.sh@42 -- # jq -r . 00:05:59.052 [2024-04-23 16:04:57.390546] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
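Note: the 1340 MiB/s figure in the crc32c summary above is simply the reported transfer rate times the 4096-byte transfer size; a quick check with the counters from that table:

    echo $(( 343264 * 4096 / 1048576 ))    # 1340 MiB/s, matching the table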
00:05:59.052 [2024-04-23 16:04:57.390674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889438 ] 00:05:59.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.052 [2024-04-23 16:04:57.505588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.052 [2024-04-23 16:04:57.604590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.052 [2024-04-23 16:04:57.609146] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:59.052 [2024-04-23 16:04:57.617125] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=0x1 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=crc32c 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=32 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=dsa 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=32 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- 
accel/accel.sh@21 -- # val=32 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=1 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val=Yes 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:05.647 16:05:04 -- accel/accel.sh@21 -- # val= 00:06:05.647 16:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:05.647 16:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@21 -- # val= 00:06:08.209 16:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # IFS=: 00:06:08.209 16:05:07 -- accel/accel.sh@20 -- # read -r var val 00:06:08.209 16:05:07 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:08.209 16:05:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:08.209 16:05:07 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:08.209 00:06:08.209 real 0m19.427s 00:06:08.209 user 0m6.571s 00:06:08.209 sys 0m0.501s 00:06:08.209 16:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.209 16:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.209 ************************************ 00:06:08.209 END TEST accel_crc32c 00:06:08.209 ************************************ 00:06:08.209 16:05:07 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:08.209 16:05:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:06:08.209 16:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.209 16:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.209 ************************************ 00:06:08.209 START TEST accel_crc32c_C2 00:06:08.209 ************************************ 00:06:08.209 16:05:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:08.209 16:05:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.209 16:05:07 -- accel/accel.sh@17 -- # local accel_module 00:06:08.209 16:05:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:08.209 16:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:08.209 16:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.209 16:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.209 16:05:07 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:08.209 16:05:07 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:08.209 16:05:07 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:08.209 16:05:07 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:08.209 16:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.209 16:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.209 16:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.209 16:05:07 -- accel/accel.sh@42 -- # jq -r . 00:06:08.471 [2024-04-23 16:05:07.142669] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:06:08.471 [2024-04-23 16:05:07.142801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891412 ] 00:06:08.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.471 [2024-04-23 16:05:07.261371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.471 [2024-04-23 16:05:07.359303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.471 [2024-04-23 16:05:07.363874] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:08.471 [2024-04-23 16:05:07.371837] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:18.482 16:05:16 -- accel/accel.sh@18 -- # out=' 00:06:18.482 SPDK Configuration: 00:06:18.482 Core mask: 0x1 00:06:18.482 00:06:18.482 Accel Perf Configuration: 00:06:18.482 Workload Type: crc32c 00:06:18.482 CRC-32C seed: 0 00:06:18.482 Transfer size: 4096 bytes 00:06:18.482 Vector count 2 00:06:18.482 Module: dsa 00:06:18.482 Queue depth: 32 00:06:18.482 Allocate depth: 32 00:06:18.482 # threads/core: 1 00:06:18.482 Run time: 1 seconds 00:06:18.482 Verify: Yes 00:06:18.482 00:06:18.482 Running for 1 seconds... 
00:06:18.482 00:06:18.482 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.482 ------------------------------------------------------------------------------------ 00:06:18.482 0,0 244220/s 1907 MiB/s 0 0 00:06:18.482 ==================================================================================== 00:06:18.482 Total 244220/s 953 MiB/s 0 0' 00:06:18.482 16:05:16 -- accel/accel.sh@20 -- # IFS=: 00:06:18.482 16:05:16 -- accel/accel.sh@20 -- # read -r var val 00:06:18.483 16:05:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.483 16:05:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.483 16:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.483 16:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.483 16:05:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:18.483 16:05:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:18.483 16:05:16 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:18.483 16:05:16 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:18.483 16:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.483 16:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.483 16:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.483 16:05:16 -- accel/accel.sh@42 -- # jq -r . 00:06:18.483 [2024-04-23 16:05:16.808535] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:06:18.483 [2024-04-23 16:05:16.808655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893346 ] 00:06:18.483 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.483 [2024-04-23 16:05:16.921642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.483 [2024-04-23 16:05:17.016867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.483 [2024-04-23 16:05:17.021422] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:18.483 [2024-04-23 16:05:17.029390] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:25.098 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.098 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.098 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.098 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.098 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.098 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.098 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.098 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=0x1 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # 
val=crc32c 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=0 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=dsa 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=32 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=32 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=1 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val=Yes 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:25.099 16:05:23 -- accel/accel.sh@21 -- # val= 00:06:25.099 16:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:25.099 16:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- 
accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@21 -- # val= 00:06:27.642 16:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:27.642 16:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:27.642 16:05:26 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:27.642 16:05:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:27.642 16:05:26 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:27.642 00:06:27.642 real 0m19.333s 00:06:27.642 user 0m6.494s 00:06:27.642 sys 0m0.487s 00:06:27.642 16:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.643 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.643 ************************************ 00:06:27.643 END TEST accel_crc32c_C2 00:06:27.643 ************************************ 00:06:27.643 16:05:26 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:27.643 16:05:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:27.643 16:05:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.643 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.643 ************************************ 00:06:27.643 START TEST accel_copy 00:06:27.643 ************************************ 00:06:27.643 16:05:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:27.643 16:05:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.643 16:05:26 -- accel/accel.sh@17 -- # local accel_module 00:06:27.643 16:05:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:27.643 16:05:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:27.643 16:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.643 16:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.643 16:05:26 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:27.643 16:05:26 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:27.643 16:05:26 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:27.643 16:05:26 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:27.643 16:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.643 16:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.643 16:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.643 16:05:26 -- accel/accel.sh@42 -- # jq -r . 00:06:27.643 [2024-04-23 16:05:26.504505] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
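A quick cross-check of the accel_crc32c_C2 table that closed out above: the Total row follows directly from the reported transfer rate and the 4096-byte transfer size, and the per-core row is exactly double, which is consistent with the "Vector count 2" payload (two 4096-byte source vectors feeding each CRC-32C). A minimal bash sketch using only the figures printed in this log:

  # figures taken from the accel_crc32c_C2 results table above
  transfers=244220; xfer=4096; vectors=2
  echo "Total row   : $(( transfers * xfer / 1024 / 1024 )) MiB/s"            # 953 MiB/s, matches the Total row
  echo "Per-core row: $(( transfers * xfer * vectors / 1024 / 1024 )) MiB/s"  # 1907 MiB/s, matches the 0,0 row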
00:06:27.643 [2024-04-23 16:05:26.504618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895176 ] 00:06:27.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.902 [2024-04-23 16:05:26.615818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.902 [2024-04-23 16:05:26.711642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.902 [2024-04-23 16:05:26.716149] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:27.902 [2024-04-23 16:05:26.724118] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:37.931 16:05:36 -- accel/accel.sh@18 -- # out=' 00:06:37.931 SPDK Configuration: 00:06:37.931 Core mask: 0x1 00:06:37.931 00:06:37.931 Accel Perf Configuration: 00:06:37.931 Workload Type: copy 00:06:37.931 Transfer size: 4096 bytes 00:06:37.931 Vector count 1 00:06:37.931 Module: dsa 00:06:37.931 Queue depth: 32 00:06:37.931 Allocate depth: 32 00:06:37.931 # threads/core: 1 00:06:37.931 Run time: 1 seconds 00:06:37.931 Verify: Yes 00:06:37.931 00:06:37.931 Running for 1 seconds... 00:06:37.931 00:06:37.931 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.931 ------------------------------------------------------------------------------------ 00:06:37.931 0,0 230432/s 900 MiB/s 0 0 00:06:37.931 ==================================================================================== 00:06:37.931 Total 230432/s 900 MiB/s 0 0' 00:06:37.931 16:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.931 16:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.931 16:05:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.931 16:05:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.931 16:05:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.931 16:05:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.931 16:05:36 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:37.931 16:05:36 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:37.931 16:05:36 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:37.931 16:05:36 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:37.931 16:05:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.931 16:05:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.931 16:05:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.931 16:05:36 -- accel/accel.sh@42 -- # jq -r . 00:06:37.931 [2024-04-23 16:05:36.217561] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
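The copy run above is driven by the accel_perf command echoed at accel.sh@12; -c /dev/fd/62 is a bash process substitution feeding the JSON assembled by build_accel_config (the dsa_scan_accel_module and iaa_scan_accel_module methods visible in the xtrace) to the app. A rough way to re-run the same invocation by hand, with the caveat that the outer JSON layout below is a placeholder assumption: the log only shows the two method entries, not the wrapper build_accel_config puts around them.

  # placeholder config; only the two "method" entries are confirmed by this log
  cfg='{"subsystems": [{"subsystem": "accel", "config": [{"method": "dsa_scan_accel_module"}, {"method": "iaa_scan_accel_module"}]}]}'
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c <(echo "$cfg") -t 1 -w copy -y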
00:06:37.931 [2024-04-23 16:05:36.217818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897250 ] 00:06:37.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.931 [2024-04-23 16:05:36.332120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.931 [2024-04-23 16:05:36.425860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.931 [2024-04-23 16:05:36.430360] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:37.931 [2024-04-23 16:05:36.438332] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val=0x1 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val=copy 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.626 16:05:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.626 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.626 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val=dsa 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@23 -- # accel_module=dsa 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val=32 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val=32 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- 
accel/accel.sh@21 -- # val=1 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val=Yes 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 16:05:42 -- accel/accel.sh@21 -- # val= 00:06:44.627 16:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 16:05:42 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@21 -- # val= 00:06:47.173 16:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # IFS=: 00:06:47.173 16:05:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.173 16:05:45 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:06:47.173 16:05:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:47.173 16:05:45 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:06:47.173 00:06:47.173 real 0m19.382s 00:06:47.173 user 0m6.563s 00:06:47.173 sys 0m0.466s 00:06:47.173 16:05:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.174 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.174 ************************************ 00:06:47.174 END TEST accel_copy 00:06:47.174 ************************************ 00:06:47.174 16:05:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.174 16:05:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:47.174 16:05:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.174 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.174 ************************************ 00:06:47.174 START TEST accel_fill 
00:06:47.174 ************************************ 00:06:47.174 16:05:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.174 16:05:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.174 16:05:45 -- accel/accel.sh@17 -- # local accel_module 00:06:47.174 16:05:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.174 16:05:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.174 16:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.174 16:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.174 16:05:45 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:47.174 16:05:45 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:47.174 16:05:45 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:47.174 16:05:45 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:47.174 16:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.174 16:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.174 16:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.174 16:05:45 -- accel/accel.sh@42 -- # jq -r . 00:06:47.174 [2024-04-23 16:05:45.904408] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:06:47.174 [2024-04-23 16:05:45.904494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899076 ] 00:06:47.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.174 [2024-04-23 16:05:45.991931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.174 [2024-04-23 16:05:46.082712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.174 [2024-04-23 16:05:46.087258] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:47.174 [2024-04-23 16:05:46.095218] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:57.182 16:05:55 -- accel/accel.sh@18 -- # out=' 00:06:57.182 SPDK Configuration: 00:06:57.182 Core mask: 0x1 00:06:57.182 00:06:57.182 Accel Perf Configuration: 00:06:57.182 Workload Type: fill 00:06:57.182 Fill pattern: 0x80 00:06:57.182 Transfer size: 4096 bytes 00:06:57.182 Vector count 1 00:06:57.182 Module: dsa 00:06:57.182 Queue depth: 64 00:06:57.182 Allocate depth: 64 00:06:57.182 # threads/core: 1 00:06:57.182 Run time: 1 seconds 00:06:57.182 Verify: Yes 00:06:57.182 00:06:57.182 Running for 1 seconds... 
00:06:57.182 00:06:57.182 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.182 ------------------------------------------------------------------------------------ 00:06:57.182 0,0 345440/s 1349 MiB/s 0 0 00:06:57.182 ==================================================================================== 00:06:57.182 Total 345440/s 1349 MiB/s 0 0' 00:06:57.182 16:05:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.182 16:05:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.182 16:05:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.182 16:05:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.182 16:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.182 16:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.182 16:05:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:57.182 16:05:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:57.182 16:05:55 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:06:57.182 16:05:55 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:57.182 16:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.182 16:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.182 16:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.182 16:05:55 -- accel/accel.sh@42 -- # jq -r . 00:06:57.182 [2024-04-23 16:05:55.568461] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:06:57.182 [2024-04-23 16:05:55.568582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901163 ] 00:06:57.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.182 [2024-04-23 16:05:55.682215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.182 [2024-04-23 16:05:55.771537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.182 [2024-04-23 16:05:55.776104] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:57.182 [2024-04-23 16:05:55.784073] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=0x1 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- 
accel/accel.sh@21 -- # val=fill 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=0x80 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=dsa 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=64 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=64 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=1 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val=Yes 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.765 16:06:02 -- accel/accel.sh@21 -- # val= 00:07:03.765 16:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.765 16:06:02 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 
16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@21 -- # val= 00:07:06.318 16:06:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.318 16:06:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.318 16:06:05 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:06.318 16:06:05 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:06.318 16:06:05 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:06.318 00:07:06.318 real 0m19.310s 00:07:06.318 user 0m6.539s 00:07:06.318 sys 0m0.420s 00:07:06.318 16:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.318 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:06.318 ************************************ 00:07:06.318 END TEST accel_fill 00:07:06.318 ************************************ 00:07:06.318 16:06:05 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:06.318 16:06:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.318 16:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.318 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:06.318 ************************************ 00:07:06.318 START TEST accel_copy_crc32c 00:07:06.318 ************************************ 00:07:06.318 16:06:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:06.318 16:06:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.318 16:06:05 -- accel/accel.sh@17 -- # local accel_module 00:07:06.318 16:06:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.318 16:06:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.318 16:06:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.318 16:06:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.318 16:06:05 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:06.318 16:06:05 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:06.318 16:06:05 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:06.318 16:06:05 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:06.318 16:06:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.318 16:06:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.318 16:06:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.318 16:06:05 -- accel/accel.sh@42 -- # jq -r . 00:07:06.318 [2024-04-23 16:06:05.239487] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
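On the accel_fill run above, -f 128 is the fill pattern byte (0x80 in the configuration dump) and -q 64 / -a 64 are the queue and allocate depths, both also echoed in that dump; the 1349 MiB/s row again follows from the transfer rate times the 4096-byte buffer. A small check against the printed values only:

  printf 'fill pattern: 0x%02x\n' 128                             # 0x80, as reported in the config dump
  echo "fill rate   : $(( 345440 * 4096 / 1024 / 1024 )) MiB/s"   # 1349 MiB/s, matches the table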
00:07:06.318 [2024-04-23 16:06:05.239567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903247 ] 00:07:06.578 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.578 [2024-04-23 16:06:05.324077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.578 [2024-04-23 16:06:05.414284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.578 [2024-04-23 16:06:05.418916] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:06.578 [2024-04-23 16:06:05.426885] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:16.576 16:06:14 -- accel/accel.sh@18 -- # out=' 00:07:16.576 SPDK Configuration: 00:07:16.576 Core mask: 0x1 00:07:16.576 00:07:16.577 Accel Perf Configuration: 00:07:16.577 Workload Type: copy_crc32c 00:07:16.577 CRC-32C seed: 0 00:07:16.577 Vector size: 4096 bytes 00:07:16.577 Transfer size: 4096 bytes 00:07:16.577 Vector count 1 00:07:16.577 Module: dsa 00:07:16.577 Queue depth: 32 00:07:16.577 Allocate depth: 32 00:07:16.577 # threads/core: 1 00:07:16.577 Run time: 1 seconds 00:07:16.577 Verify: Yes 00:07:16.577 00:07:16.577 Running for 1 seconds... 00:07:16.577 00:07:16.577 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.577 ------------------------------------------------------------------------------------ 00:07:16.577 0,0 208416/s 814 MiB/s 0 0 00:07:16.577 ==================================================================================== 00:07:16.577 Total 208416/s 814 MiB/s 0 0' 00:07:16.577 16:06:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.577 16:06:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.577 16:06:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:16.577 16:06:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:16.577 16:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.577 16:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.577 16:06:14 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:16.577 16:06:14 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:16.577 16:06:14 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:16.577 16:06:14 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:16.577 16:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.577 16:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.577 16:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.577 16:06:14 -- accel/accel.sh@42 -- # jq -r . 00:07:16.577 [2024-04-23 16:06:14.899573] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
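The copy_crc32c workload that just started its second pass combines, per the workload name, a copy with a CRC-32C computation over the source, which helps explain why its 814 MiB/s sits below the plain copy and plain crc32c figures earlier in this log. The rate and the table row agree:

  echo "copy_crc32c: $(( 208416 * 4096 / 1024 / 1024 )) MiB/s"   # 814 MiB/s, matches the table above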
00:07:16.577 [2024-04-23 16:06:14.899703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905420 ] 00:07:16.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.577 [2024-04-23 16:06:15.016143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.577 [2024-04-23 16:06:15.107546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.577 [2024-04-23 16:06:15.112106] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:16.577 [2024-04-23 16:06:15.120075] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=0x1 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=0 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=dsa 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 
00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=32 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=32 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=1 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val=Yes 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.170 16:06:21 -- accel/accel.sh@21 -- # val= 00:07:23.170 16:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.170 16:06:21 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@21 -- # val= 00:07:25.717 16:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # IFS=: 00:07:25.717 16:06:24 -- accel/accel.sh@20 -- # read -r var val 00:07:25.717 16:06:24 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:25.717 16:06:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:25.717 16:06:24 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:25.717 00:07:25.717 real 0m19.304s 00:07:25.717 user 0m6.519s 00:07:25.717 sys 0m0.417s 00:07:25.717 16:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.717 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:25.717 ************************************ 
00:07:25.717 END TEST accel_copy_crc32c 00:07:25.717 ************************************ 00:07:25.717 16:06:24 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:25.717 16:06:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:25.717 16:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.717 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:25.717 ************************************ 00:07:25.717 START TEST accel_copy_crc32c_C2 00:07:25.717 ************************************ 00:07:25.717 16:06:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:25.717 16:06:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.717 16:06:24 -- accel/accel.sh@17 -- # local accel_module 00:07:25.717 16:06:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:25.717 16:06:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:25.717 16:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.717 16:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.717 16:06:24 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:25.717 16:06:24 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:25.717 16:06:24 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:25.717 16:06:24 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:25.717 16:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.717 16:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.717 16:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.717 16:06:24 -- accel/accel.sh@42 -- # jq -r . 00:07:25.717 [2024-04-23 16:06:24.569666] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:07:25.717 [2024-04-23 16:06:24.569744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907451 ] 00:07:25.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.978 [2024-04-23 16:06:24.655090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.978 [2024-04-23 16:06:24.743856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.978 [2024-04-23 16:06:24.748386] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:25.978 [2024-04-23 16:06:24.756352] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:35.969 16:06:34 -- accel/accel.sh@18 -- # out=' 00:07:35.969 SPDK Configuration: 00:07:35.969 Core mask: 0x1 00:07:35.969 00:07:35.969 Accel Perf Configuration: 00:07:35.969 Workload Type: copy_crc32c 00:07:35.969 CRC-32C seed: 0 00:07:35.969 Vector size: 4096 bytes 00:07:35.969 Transfer size: 8192 bytes 00:07:35.969 Vector count 2 00:07:35.969 Module: dsa 00:07:35.969 Queue depth: 32 00:07:35.969 Allocate depth: 32 00:07:35.969 # threads/core: 1 00:07:35.969 Run time: 1 seconds 00:07:35.969 Verify: Yes 00:07:35.969 00:07:35.969 Running for 1 seconds... 
00:07:35.969 00:07:35.969 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.969 ------------------------------------------------------------------------------------ 00:07:35.969 0,0 141157/s 1102 MiB/s 0 0 00:07:35.969 ==================================================================================== 00:07:35.969 Total 141157/s 551 MiB/s 0 0' 00:07:35.969 16:06:34 -- accel/accel.sh@20 -- # IFS=: 00:07:35.969 16:06:34 -- accel/accel.sh@20 -- # read -r var val 00:07:35.969 16:06:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:35.969 16:06:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:35.969 16:06:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.969 16:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.969 16:06:34 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:35.969 16:06:34 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:35.969 16:06:34 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:35.969 16:06:34 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:35.969 16:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.969 16:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.969 16:06:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.969 16:06:34 -- accel/accel.sh@42 -- # jq -r . 00:07:35.969 [2024-04-23 16:06:34.180668] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:07:35.969 [2024-04-23 16:06:34.180754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909257 ] 00:07:35.969 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.969 [2024-04-23 16:06:34.267807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.969 [2024-04-23 16:06:34.357593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.969 [2024-04-23 16:06:34.362147] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:35.969 [2024-04-23 16:06:34.370117] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:42.552 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.552 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.552 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.552 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.552 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=0x1 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- 
accel/accel.sh@21 -- # val=copy_crc32c 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=0 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=dsa 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@23 -- # accel_module=dsa 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=32 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=32 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=1 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val=Yes 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.553 16:06:40 -- accel/accel.sh@21 -- # val= 00:07:42.553 16:06:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.553 16:06:40 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@21 -- # val= 00:07:45.097 16:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.097 16:06:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.097 16:06:43 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:07:45.097 16:06:43 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:45.097 16:06:43 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:07:45.097 00:07:45.097 real 0m19.270s 00:07:45.097 user 0m6.504s 00:07:45.097 sys 0m0.385s 00:07:45.097 16:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.097 16:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.097 ************************************ 00:07:45.097 END TEST accel_copy_crc32c_C2 00:07:45.097 ************************************ 00:07:45.097 16:06:43 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:45.097 16:06:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:45.097 16:06:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.097 16:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.097 ************************************ 00:07:45.097 START TEST accel_dualcast 00:07:45.097 ************************************ 00:07:45.097 16:06:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:45.097 16:06:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.097 16:06:43 -- accel/accel.sh@17 -- # local accel_module 00:07:45.097 16:06:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:45.097 16:06:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:45.097 16:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.097 16:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.097 16:06:43 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:45.097 16:06:43 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:45.097 16:06:43 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:45.097 16:06:43 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:45.097 16:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.097 16:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.097 16:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.097 16:06:43 -- accel/accel.sh@42 -- # jq -r . 00:07:45.097 [2024-04-23 16:06:43.884820] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:07:45.097 [2024-04-23 16:06:43.884925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911352 ] 00:07:45.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.097 [2024-04-23 16:06:43.980218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.358 [2024-04-23 16:06:44.069826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.358 [2024-04-23 16:06:44.074423] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:45.358 [2024-04-23 16:06:44.082372] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:55.358 16:06:53 -- accel/accel.sh@18 -- # out=' 00:07:55.358 SPDK Configuration: 00:07:55.358 Core mask: 0x1 00:07:55.358 00:07:55.358 Accel Perf Configuration: 00:07:55.358 Workload Type: dualcast 00:07:55.358 Transfer size: 4096 bytes 00:07:55.358 Vector count 1 00:07:55.358 Module: dsa 00:07:55.358 Queue depth: 32 00:07:55.358 Allocate depth: 32 00:07:55.358 # threads/core: 1 00:07:55.358 Run time: 1 seconds 00:07:55.358 Verify: Yes 00:07:55.358 00:07:55.358 Running for 1 seconds... 00:07:55.358 00:07:55.358 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.358 ------------------------------------------------------------------------------------ 00:07:55.358 0,0 216288/s 844 MiB/s 0 0 00:07:55.358 ==================================================================================== 00:07:55.358 Total 216288/s 844 MiB/s 0 0' 00:07:55.358 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.358 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.358 16:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:55.358 16:06:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:55.358 16:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.358 16:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.358 16:06:53 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:55.358 16:06:53 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:55.359 16:06:53 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:07:55.359 16:06:53 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:55.359 16:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.359 16:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.359 16:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.359 16:06:53 -- accel/accel.sh@42 -- # jq -r . 00:07:55.359 [2024-04-23 16:06:53.535334] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
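dualcast writes one source buffer to two destinations per operation; the 844 MiB/s in the table above matches the transfer rate times a single 4096-byte buffer, so whether the second destination write is also counted in the bandwidth figure is not something this log shows. The check:

  echo "dualcast: $(( 216288 * 4096 / 1024 / 1024 )) MiB/s"   # 844 MiB/s, matches the table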
00:07:55.359 [2024-04-23 16:06:53.535455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913158 ] 00:07:55.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.359 [2024-04-23 16:06:53.650905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.359 [2024-04-23 16:06:53.739695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.359 [2024-04-23 16:06:53.744218] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:55.359 [2024-04-23 16:06:53.752188] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=0x1 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=dualcast 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=dsa 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=32 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=32 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- 
accel/accel.sh@21 -- # val=1 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val=Yes 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:01.941 16:07:00 -- accel/accel.sh@21 -- # val= 00:08:01.941 16:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # IFS=: 00:08:01.941 16:07:00 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@21 -- # val= 00:08:04.484 16:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # IFS=: 00:08:04.484 16:07:03 -- accel/accel.sh@20 -- # read -r var val 00:08:04.484 16:07:03 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:04.484 16:07:03 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:04.484 16:07:03 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:04.484 00:08:04.484 real 0m19.317s 00:08:04.484 user 0m6.529s 00:08:04.484 sys 0m0.429s 00:08:04.484 16:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.484 16:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:04.484 ************************************ 00:08:04.484 END TEST accel_dualcast 00:08:04.484 ************************************ 00:08:04.484 16:07:03 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:04.484 16:07:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:04.484 16:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.484 16:07:03 -- common/autotest_common.sh@10 -- # set +x 00:08:04.484 ************************************ 00:08:04.484 START TEST accel_compare 00:08:04.484 
************************************ 00:08:04.484 16:07:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:04.484 16:07:03 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.484 16:07:03 -- accel/accel.sh@17 -- # local accel_module 00:08:04.484 16:07:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:04.484 16:07:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:04.484 16:07:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.484 16:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.484 16:07:03 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:04.484 16:07:03 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:04.484 16:07:03 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:04.484 16:07:03 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:04.484 16:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.484 16:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.484 16:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.484 16:07:03 -- accel/accel.sh@42 -- # jq -r . 00:08:04.484 [2024-04-23 16:07:03.219910] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:04.484 [2024-04-23 16:07:03.219986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915138 ] 00:08:04.484 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.484 [2024-04-23 16:07:03.305491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.484 [2024-04-23 16:07:03.395873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.484 [2024-04-23 16:07:03.400372] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:04.484 [2024-04-23 16:07:03.408345] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:14.538 16:07:12 -- accel/accel.sh@18 -- # out=' 00:08:14.538 SPDK Configuration: 00:08:14.538 Core mask: 0x1 00:08:14.538 00:08:14.538 Accel Perf Configuration: 00:08:14.538 Workload Type: compare 00:08:14.538 Transfer size: 4096 bytes 00:08:14.538 Vector count 1 00:08:14.538 Module: dsa 00:08:14.538 Queue depth: 32 00:08:14.538 Allocate depth: 32 00:08:14.538 # threads/core: 1 00:08:14.538 Run time: 1 seconds 00:08:14.538 Verify: Yes 00:08:14.538 00:08:14.538 Running for 1 seconds... 
00:08:14.538 00:08:14.538 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:14.538 ------------------------------------------------------------------------------------ 00:08:14.538 0,0 241600/s 943 MiB/s 0 0 00:08:14.538 ==================================================================================== 00:08:14.538 Total 241600/s 943 MiB/s 0 0' 00:08:14.538 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:08:14.538 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:08:14.538 16:07:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:14.538 16:07:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:14.538 16:07:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.538 16:07:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.538 16:07:12 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:14.538 16:07:12 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:14.538 16:07:12 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:14.538 16:07:12 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:14.538 16:07:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.538 16:07:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.538 16:07:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.538 16:07:12 -- accel/accel.sh@42 -- # jq -r . 00:08:14.538 [2024-04-23 16:07:12.855118] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:14.538 [2024-04-23 16:07:12.855246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917064 ] 00:08:14.538 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.538 [2024-04-23 16:07:12.967699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.538 [2024-04-23 16:07:13.057130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.538 [2024-04-23 16:07:13.061661] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:14.538 [2024-04-23 16:07:13.069639] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=0x1 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=compare 
00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=dsa 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@23 -- # accel_module=dsa 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=32 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=32 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=1 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val=Yes 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:21.232 16:07:19 -- accel/accel.sh@21 -- # val= 00:08:21.232 16:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # IFS=: 00:08:21.232 16:07:19 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # 
IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@21 -- # val= 00:08:23.781 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:08:23.781 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:08:23.781 16:07:22 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:08:23.781 16:07:22 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:23.781 16:07:22 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:08:23.781 00:08:23.781 real 0m19.306s 00:08:23.781 user 0m6.542s 00:08:23.781 sys 0m0.399s 00:08:23.781 16:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.781 16:07:22 -- common/autotest_common.sh@10 -- # set +x 00:08:23.781 ************************************ 00:08:23.781 END TEST accel_compare 00:08:23.781 ************************************ 00:08:23.781 16:07:22 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:23.781 16:07:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:23.781 16:07:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.781 16:07:22 -- common/autotest_common.sh@10 -- # set +x 00:08:23.781 ************************************ 00:08:23.781 START TEST accel_xor 00:08:23.781 ************************************ 00:08:23.781 16:07:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:23.781 16:07:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.781 16:07:22 -- accel/accel.sh@17 -- # local accel_module 00:08:23.781 16:07:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:23.781 16:07:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:23.781 16:07:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.781 16:07:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:23.781 16:07:22 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:23.781 16:07:22 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:23.781 16:07:22 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:23.781 16:07:22 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:23.781 16:07:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:23.781 16:07:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:23.781 16:07:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:23.781 16:07:22 -- accel/accel.sh@42 -- # jq -r . 00:08:23.781 [2024-04-23 16:07:22.570598] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
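The build_accel_config trace repeated before each run collects two JSON fragments ({"method": "dsa_scan_accel_module"} and {"method": "iaa_scan_accel_module"}), joins them with IFS=, and pipes the result through jq -r before accel_perf reads it via -c /dev/fd/62. The outer wrapper is not visible in this trace, so the sketch below is an assumption about its shape, not something captured in the log; only the two method entries are confirmed by the output:

    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "dsa_scan_accel_module" },
            { "method": "iaa_scan_accel_module" }
          ]
        }
      ]
    }

With a config along these lines the DSA and IAA hardware modules are registered, and operations they service report "Module: dsa" in the summaries, while workloads they do not pick up fall back to the built-in software path, as the xor runs below show ("Module: software").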
00:08:23.781 [2024-04-23 16:07:22.570724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919057 ] 00:08:23.781 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.781 [2024-04-23 16:07:22.687326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.042 [2024-04-23 16:07:22.777012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.042 [2024-04-23 16:07:22.781555] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:24.042 [2024-04-23 16:07:22.789522] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:34.047 16:07:32 -- accel/accel.sh@18 -- # out=' 00:08:34.047 SPDK Configuration: 00:08:34.047 Core mask: 0x1 00:08:34.047 00:08:34.047 Accel Perf Configuration: 00:08:34.047 Workload Type: xor 00:08:34.047 Source buffers: 2 00:08:34.047 Transfer size: 4096 bytes 00:08:34.047 Vector count 1 00:08:34.047 Module: software 00:08:34.047 Queue depth: 32 00:08:34.047 Allocate depth: 32 00:08:34.047 # threads/core: 1 00:08:34.047 Run time: 1 seconds 00:08:34.047 Verify: Yes 00:08:34.047 00:08:34.047 Running for 1 seconds... 00:08:34.047 00:08:34.047 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:34.047 ------------------------------------------------------------------------------------ 00:08:34.047 0,0 451968/s 1765 MiB/s 0 0 00:08:34.047 ==================================================================================== 00:08:34.047 Total 451968/s 1765 MiB/s 0 0' 00:08:34.047 16:07:32 -- accel/accel.sh@20 -- # IFS=: 00:08:34.047 16:07:32 -- accel/accel.sh@20 -- # read -r var val 00:08:34.047 16:07:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:34.047 16:07:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:34.047 16:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.047 16:07:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.047 16:07:32 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:34.047 16:07:32 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:34.047 16:07:32 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:34.047 16:07:32 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:34.047 16:07:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.047 16:07:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.047 16:07:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.047 16:07:32 -- accel/accel.sh@42 -- # jq -r . 00:08:34.047 [2024-04-23 16:07:32.220566] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:08:34.047 [2024-04-23 16:07:32.220648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920968 ] 00:08:34.047 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.047 [2024-04-23 16:07:32.303157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.047 [2024-04-23 16:07:32.391371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.047 [2024-04-23 16:07:32.395988] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:34.047 [2024-04-23 16:07:32.403959] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=0x1 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=xor 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=2 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=software 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=32 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- 
accel/accel.sh@21 -- # val=32 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=1 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val=Yes 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:40.626 16:07:38 -- accel/accel.sh@21 -- # val= 00:08:40.626 16:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:08:40.626 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@21 -- # val= 00:08:43.166 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:08:43.166 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:08:43.166 16:07:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:43.166 16:07:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:43.166 16:07:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.166 00:08:43.166 real 0m19.280s 00:08:43.166 user 0m6.504s 00:08:43.166 sys 0m0.416s 00:08:43.166 16:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.166 16:07:41 -- common/autotest_common.sh@10 -- # set +x 00:08:43.166 ************************************ 00:08:43.166 END TEST accel_xor 00:08:43.166 ************************************ 00:08:43.166 16:07:41 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:43.166 16:07:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 
00:08:43.166 16:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.166 16:07:41 -- common/autotest_common.sh@10 -- # set +x 00:08:43.166 ************************************ 00:08:43.166 START TEST accel_xor 00:08:43.166 ************************************ 00:08:43.166 16:07:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:43.166 16:07:41 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.166 16:07:41 -- accel/accel.sh@17 -- # local accel_module 00:08:43.166 16:07:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:43.166 16:07:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:43.166 16:07:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.166 16:07:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.166 16:07:41 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:43.166 16:07:41 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:43.166 16:07:41 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:43.166 16:07:41 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:43.166 16:07:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.166 16:07:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.166 16:07:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.166 16:07:41 -- accel/accel.sh@42 -- # jq -r . 00:08:43.166 [2024-04-23 16:07:41.881838] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:43.166 [2024-04-23 16:07:41.881962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922862 ] 00:08:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.166 [2024-04-23 16:07:42.000722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.166 [2024-04-23 16:07:42.091651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.166 [2024-04-23 16:07:42.096192] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:43.428 [2024-04-23 16:07:42.104162] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:53.431 16:07:51 -- accel/accel.sh@18 -- # out=' 00:08:53.431 SPDK Configuration: 00:08:53.431 Core mask: 0x1 00:08:53.431 00:08:53.431 Accel Perf Configuration: 00:08:53.431 Workload Type: xor 00:08:53.431 Source buffers: 3 00:08:53.431 Transfer size: 4096 bytes 00:08:53.431 Vector count 1 00:08:53.431 Module: software 00:08:53.431 Queue depth: 32 00:08:53.431 Allocate depth: 32 00:08:53.431 # threads/core: 1 00:08:53.431 Run time: 1 seconds 00:08:53.431 Verify: Yes 00:08:53.431 00:08:53.431 Running for 1 seconds... 
00:08:53.431 00:08:53.431 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:53.431 ------------------------------------------------------------------------------------ 00:08:53.431 0,0 443296/s 1731 MiB/s 0 0 00:08:53.431 ==================================================================================== 00:08:53.431 Total 443296/s 1731 MiB/s 0 0' 00:08:53.431 16:07:51 -- accel/accel.sh@20 -- # IFS=: 00:08:53.431 16:07:51 -- accel/accel.sh@20 -- # read -r var val 00:08:53.431 16:07:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:53.431 16:07:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:53.431 16:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.431 16:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:53.431 16:07:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:53.431 16:07:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:53.431 16:07:51 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:08:53.431 16:07:51 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:53.431 16:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:53.431 16:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:53.431 16:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:53.431 16:07:51 -- accel/accel.sh@42 -- # jq -r . 00:08:53.431 [2024-04-23 16:07:51.558077] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:08:53.431 [2024-04-23 16:07:51.558165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924867 ] 00:08:53.431 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.431 [2024-04-23 16:07:51.644461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.431 [2024-04-23 16:07:51.733129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.431 [2024-04-23 16:07:51.737742] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:53.431 [2024-04-23 16:07:51.745711] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=0x1 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=xor 
00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=3 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=software 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@23 -- # accel_module=software 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=32 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=32 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=1 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val=Yes 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:00.017 16:07:58 -- accel/accel.sh@21 -- # val= 00:09:00.017 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:09:00.017 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 
-- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@21 -- # val= 00:09:02.557 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:09:02.557 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:09:02.557 16:08:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:02.557 16:08:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:02.557 16:08:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.557 00:09:02.557 real 0m19.289s 00:09:02.557 user 0m6.503s 00:09:02.557 sys 0m0.433s 00:09:02.557 16:08:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.557 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 ************************************ 00:09:02.557 END TEST accel_xor 00:09:02.557 ************************************ 00:09:02.557 16:08:01 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:02.557 16:08:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:02.557 16:08:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.557 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 ************************************ 00:09:02.557 START TEST accel_dif_verify 00:09:02.557 ************************************ 00:09:02.557 16:08:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:02.557 16:08:01 -- accel/accel.sh@16 -- # local accel_opc 00:09:02.557 16:08:01 -- accel/accel.sh@17 -- # local accel_module 00:09:02.557 16:08:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:02.557 16:08:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:02.557 16:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.557 16:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.557 16:08:01 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:02.557 16:08:01 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:02.557 16:08:01 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:02.557 16:08:01 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:02.557 16:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.557 16:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.557 16:08:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.557 16:08:01 -- accel/accel.sh@42 -- # jq -r . 00:09:02.557 [2024-04-23 16:08:01.198620] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:02.557 [2024-04-23 16:08:01.198756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926685 ] 00:09:02.557 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.557 [2024-04-23 16:08:01.309991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.557 [2024-04-23 16:08:01.399083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.557 [2024-04-23 16:08:01.403607] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:02.557 [2024-04-23 16:08:01.411616] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:12.559 16:08:10 -- accel/accel.sh@18 -- # out=' 00:09:12.559 SPDK Configuration: 00:09:12.559 Core mask: 0x1 00:09:12.559 00:09:12.559 Accel Perf Configuration: 00:09:12.559 Workload Type: dif_verify 00:09:12.559 Vector size: 4096 bytes 00:09:12.559 Transfer size: 4096 bytes 00:09:12.559 Block size: 512 bytes 00:09:12.559 Metadata size: 8 bytes 00:09:12.559 Vector count 1 00:09:12.559 Module: dsa 00:09:12.559 Queue depth: 32 00:09:12.559 Allocate depth: 32 00:09:12.559 # threads/core: 1 00:09:12.559 Run time: 1 seconds 00:09:12.559 Verify: No 00:09:12.559 00:09:12.559 Running for 1 seconds... 00:09:12.559 00:09:12.559 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:12.559 ------------------------------------------------------------------------------------ 00:09:12.560 0,0 356096/s 1412 MiB/s 0 0 00:09:12.560 ==================================================================================== 00:09:12.560 Total 356096/s 1391 MiB/s 0 0' 00:09:12.560 16:08:10 -- accel/accel.sh@20 -- # IFS=: 00:09:12.560 16:08:10 -- accel/accel.sh@20 -- # read -r var val 00:09:12.560 16:08:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:12.560 16:08:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:12.560 16:08:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:12.560 16:08:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:12.560 16:08:10 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:12.560 16:08:10 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:12.560 16:08:10 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:12.560 16:08:10 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:12.560 16:08:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:12.560 16:08:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:12.560 16:08:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:12.560 16:08:10 -- accel/accel.sh@42 -- # jq -r . 00:09:12.560 [2024-04-23 16:08:10.843677] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
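For the dif_verify run above, the printed sizes (vector and transfer size 4096 bytes, block size 512 bytes, metadata size 8 bytes) suggest each 4 KiB transfer is treated as eight 512-byte blocks, each carrying 8 bytes of DIF protection information to check; this is a reading of the configuration block, not something the log states explicitly. The Total row again follows the transfers-times-4096-bytes relationship:

    # quick arithmetic on the dif_verify totals above (numbers from the log)
    echo $(( 4096 / 512 )) blocks per transfer            # -> 8
    echo $(( 356096 * 4096 / 1024 / 1024 )) MiB/s         # -> 1391 MiB/s, matching the Total row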
00:09:12.560 [2024-04-23 16:08:10.843758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928769 ] 00:09:12.560 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.560 [2024-04-23 16:08:10.929538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.560 [2024-04-23 16:08:11.024091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.560 [2024-04-23 16:08:11.028642] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:12.560 [2024-04-23 16:08:11.036605] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=0x1 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=dif_verify 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=dsa 
00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@23 -- # accel_module=dsa 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=32 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=32 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=1 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val=No 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:19.138 16:08:17 -- accel/accel.sh@21 -- # val= 00:09:19.138 16:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # IFS=: 00:09:19.138 16:08:17 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@21 -- # val= 00:09:21.684 16:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # IFS=: 00:09:21.684 16:08:20 -- accel/accel.sh@20 -- # read -r var val 00:09:21.684 16:08:20 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:09:21.684 16:08:20 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:21.684 16:08:20 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:09:21.684 00:09:21.684 real 0m19.279s 
00:09:21.684 user 0m6.492s 00:09:21.684 sys 0m0.422s 00:09:21.684 16:08:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.684 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:21.684 ************************************ 00:09:21.684 END TEST accel_dif_verify 00:09:21.684 ************************************ 00:09:21.684 16:08:20 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:21.684 16:08:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:21.684 16:08:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.684 16:08:20 -- common/autotest_common.sh@10 -- # set +x 00:09:21.684 ************************************ 00:09:21.684 START TEST accel_dif_generate 00:09:21.684 ************************************ 00:09:21.684 16:08:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:09:21.684 16:08:20 -- accel/accel.sh@16 -- # local accel_opc 00:09:21.684 16:08:20 -- accel/accel.sh@17 -- # local accel_module 00:09:21.684 16:08:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:21.684 16:08:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:21.684 16:08:20 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.684 16:08:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.684 16:08:20 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:21.684 16:08:20 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:21.684 16:08:20 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:21.684 16:08:20 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:21.684 16:08:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.684 16:08:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.684 16:08:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.684 16:08:20 -- accel/accel.sh@42 -- # jq -r . 00:09:21.684 [2024-04-23 16:08:20.525739] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:21.684 [2024-04-23 16:08:20.525884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930587 ] 00:09:21.684 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.946 [2024-04-23 16:08:20.656805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.946 [2024-04-23 16:08:20.752816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.946 [2024-04-23 16:08:20.757424] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:21.946 [2024-04-23 16:08:20.765378] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:31.954 16:08:30 -- accel/accel.sh@18 -- # out=' 00:09:31.954 SPDK Configuration: 00:09:31.954 Core mask: 0x1 00:09:31.954 00:09:31.954 Accel Perf Configuration: 00:09:31.954 Workload Type: dif_generate 00:09:31.954 Vector size: 4096 bytes 00:09:31.954 Transfer size: 4096 bytes 00:09:31.954 Block size: 512 bytes 00:09:31.954 Metadata size: 8 bytes 00:09:31.954 Vector count 1 00:09:31.954 Module: software 00:09:31.954 Queue depth: 32 00:09:31.954 Allocate depth: 32 00:09:31.954 # threads/core: 1 00:09:31.954 Run time: 1 seconds 00:09:31.954 Verify: No 00:09:31.954 00:09:31.954 Running for 1 seconds... 
00:09:31.954 00:09:31.954 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:31.954 ------------------------------------------------------------------------------------ 00:09:31.954 0,0 157056/s 623 MiB/s 0 0 00:09:31.954 ==================================================================================== 00:09:31.954 Total 157056/s 613 MiB/s 0 0' 00:09:31.954 16:08:30 -- accel/accel.sh@20 -- # IFS=: 00:09:31.954 16:08:30 -- accel/accel.sh@20 -- # read -r var val 00:09:31.954 16:08:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:31.954 16:08:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:31.954 16:08:30 -- accel/accel.sh@12 -- # build_accel_config 00:09:31.954 16:08:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:31.954 16:08:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:31.954 16:08:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:31.954 16:08:30 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:31.954 16:08:30 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:31.954 16:08:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:31.954 16:08:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:31.954 16:08:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:31.954 16:08:30 -- accel/accel.sh@42 -- # jq -r . 00:09:31.954 [2024-04-23 16:08:30.284336] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:31.954 [2024-04-23 16:08:30.284485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932457 ] 00:09:31.954 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.954 [2024-04-23 16:08:30.423523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.954 [2024-04-23 16:08:30.533463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.954 [2024-04-23 16:08:30.538292] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:31.954 [2024-04-23 16:08:30.546241] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val=0x1 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # 
val=dif_generate 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val=software 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@23 -- # accel_module=software 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val=32 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val=32 00:09:38.539 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.539 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.539 16:08:36 -- accel/accel.sh@21 -- # val=1 00:09:38.540 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.540 16:08:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:38.540 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.540 16:08:36 -- accel/accel.sh@21 -- # val=No 00:09:38.540 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.540 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.540 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:38.540 16:08:36 -- accel/accel.sh@21 -- # val= 00:09:38.540 16:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # IFS=: 00:09:38.540 16:08:36 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # 
case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@21 -- # val= 00:09:41.107 16:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # IFS=: 00:09:41.107 16:08:39 -- accel/accel.sh@20 -- # read -r var val 00:09:41.107 16:08:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:41.107 16:08:39 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:09:41.107 16:08:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.107 00:09:41.107 real 0m19.476s 00:09:41.107 user 0m6.589s 00:09:41.107 sys 0m0.518s 00:09:41.107 16:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.107 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:09:41.107 ************************************ 00:09:41.107 END TEST accel_dif_generate 00:09:41.107 ************************************ 00:09:41.107 16:08:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:41.107 16:08:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:41.107 16:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.107 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:09:41.107 ************************************ 00:09:41.107 START TEST accel_dif_generate_copy 00:09:41.107 ************************************ 00:09:41.107 16:08:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:09:41.107 16:08:39 -- accel/accel.sh@16 -- # local accel_opc 00:09:41.107 16:08:39 -- accel/accel.sh@17 -- # local accel_module 00:09:41.107 16:08:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:09:41.107 16:08:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:41.107 16:08:39 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.107 16:08:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.107 16:08:39 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:41.107 16:08:39 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:41.107 16:08:39 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:41.107 16:08:39 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:41.107 16:08:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.107 16:08:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.107 16:08:39 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.107 16:08:39 -- accel/accel.sh@42 -- # 
jq -r . 00:09:41.107 [2024-04-23 16:08:40.022731] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:09:41.107 [2024-04-23 16:08:40.022845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934487 ] 00:09:41.369 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.369 [2024-04-23 16:08:40.137255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.369 [2024-04-23 16:08:40.233313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.369 [2024-04-23 16:08:40.237888] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:41.369 [2024-04-23 16:08:40.245856] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:51.356 16:08:49 -- accel/accel.sh@18 -- # out=' 00:09:51.356 SPDK Configuration: 00:09:51.356 Core mask: 0x1 00:09:51.356 00:09:51.356 Accel Perf Configuration: 00:09:51.356 Workload Type: dif_generate_copy 00:09:51.356 Vector size: 4096 bytes 00:09:51.356 Transfer size: 4096 bytes 00:09:51.356 Vector count 1 00:09:51.356 Module: dsa 00:09:51.356 Queue depth: 32 00:09:51.356 Allocate depth: 32 00:09:51.356 # threads/core: 1 00:09:51.356 Run time: 1 seconds 00:09:51.356 Verify: No 00:09:51.356 00:09:51.356 Running for 1 seconds... 00:09:51.356 00:09:51.356 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:51.356 ------------------------------------------------------------------------------------ 00:09:51.356 0,0 337920/s 1340 MiB/s 0 0 00:09:51.356 ==================================================================================== 00:09:51.356 Total 337920/s 1320 MiB/s 0 0' 00:09:51.356 16:08:49 -- accel/accel.sh@20 -- # IFS=: 00:09:51.356 16:08:49 -- accel/accel.sh@20 -- # read -r var val 00:09:51.356 16:08:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:51.356 16:08:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:51.356 16:08:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.356 16:08:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.356 16:08:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:51.356 16:08:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:51.356 16:08:49 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:09:51.356 16:08:49 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:51.356 16:08:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.356 16:08:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.356 16:08:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.356 16:08:49 -- accel/accel.sh@42 -- # jq -r . 00:09:51.356 [2024-04-23 16:08:49.695721] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:09:51.356 [2024-04-23 16:08:49.695800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936286 ] 00:09:51.356 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.356 [2024-04-23 16:08:49.781202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.356 [2024-04-23 16:08:49.877720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.356 [2024-04-23 16:08:49.882236] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:51.356 [2024-04-23 16:08:49.890206] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=0x1 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=dsa 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@23 -- # accel_module=dsa 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=32 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var 
val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=32 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val=1 00:09:58.052 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.052 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.052 16:08:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:58.053 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.053 16:08:56 -- accel/accel.sh@21 -- # val=No 00:09:58.053 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.053 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.053 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:09:58.053 16:08:56 -- accel/accel.sh@21 -- # val= 00:09:58.053 16:08:56 -- accel/accel.sh@22 -- # case "$var" in 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # IFS=: 00:09:58.053 16:08:56 -- accel/accel.sh@20 -- # read -r var val 00:10:00.586 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@21 -- # val= 00:10:00.587 16:08:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # IFS=: 00:10:00.587 16:08:59 -- accel/accel.sh@20 -- # read -r var val 00:10:00.587 16:08:59 -- accel/accel.sh@28 -- # [[ -n dsa ]] 00:10:00.587 16:08:59 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:00.587 16:08:59 -- accel/accel.sh@28 -- # [[ dsa == \d\s\a ]] 00:10:00.587 00:10:00.587 real 0m19.343s 00:10:00.587 user 0m6.534s 00:10:00.587 sys 0m0.432s 00:10:00.587 16:08:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.587 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:10:00.587 ************************************ 00:10:00.587 END TEST accel_dif_generate_copy 00:10:00.587 ************************************ 00:10:00.587 16:08:59 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:00.587 16:08:59 -- accel/accel.sh@108 -- # run_test accel_comp 
accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:00.587 16:08:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:00.587 16:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.587 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:10:00.587 ************************************ 00:10:00.587 START TEST accel_comp 00:10:00.587 ************************************ 00:10:00.587 16:08:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:00.587 16:08:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:00.587 16:08:59 -- accel/accel.sh@17 -- # local accel_module 00:10:00.587 16:08:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:00.587 16:08:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:00.587 16:08:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.587 16:08:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.587 16:08:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:00.587 16:08:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:00.587 16:08:59 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:00.587 16:08:59 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:00.587 16:08:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.587 16:08:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.587 16:08:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.587 16:08:59 -- accel/accel.sh@42 -- # jq -r . 00:10:00.587 [2024-04-23 16:08:59.394438] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:00.587 [2024-04-23 16:08:59.394556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938367 ] 00:10:00.587 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.587 [2024-04-23 16:08:59.506428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.846 [2024-04-23 16:08:59.601510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.846 [2024-04-23 16:08:59.606013] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:00.846 [2024-04-23 16:08:59.613984] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:10.830 16:09:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:10.830 00:10:10.830 SPDK Configuration: 00:10:10.830 Core mask: 0x1 00:10:10.830 00:10:10.830 Accel Perf Configuration: 00:10:10.830 Workload Type: compress 00:10:10.830 Transfer size: 4096 bytes 00:10:10.830 Vector count 1 00:10:10.830 Module: iaa 00:10:10.830 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:10.830 Queue depth: 32 00:10:10.830 Allocate depth: 32 00:10:10.830 # threads/core: 1 00:10:10.830 Run time: 1 seconds 00:10:10.830 Verify: No 00:10:10.830 00:10:10.830 Running for 1 seconds... 
00:10:10.830 00:10:10.830 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:10.830 ------------------------------------------------------------------------------------ 00:10:10.830 0,0 282832/s 1178 MiB/s 0 0 00:10:10.830 ==================================================================================== 00:10:10.830 Total 282832/s 1104 MiB/s 0 0' 00:10:10.830 16:09:09 -- accel/accel.sh@20 -- # IFS=: 00:10:10.830 16:09:09 -- accel/accel.sh@20 -- # read -r var val 00:10:10.830 16:09:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:10.830 16:09:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:10.830 16:09:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.830 16:09:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.830 16:09:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:10.830 16:09:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:10.830 16:09:09 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:10.830 16:09:09 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:10.830 16:09:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.830 16:09:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.830 16:09:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.830 16:09:09 -- accel/accel.sh@42 -- # jq -r . 00:10:10.830 [2024-04-23 16:09:09.065085] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:10.830 [2024-04-23 16:09:09.065205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2940751 ] 00:10:10.830 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.830 [2024-04-23 16:09:09.180632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.830 [2024-04-23 16:09:09.278346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.830 [2024-04-23 16:09:09.282903] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:10.830 [2024-04-23 16:09:09.290871] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:17.399 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.399 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.399 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.399 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.399 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.399 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.399 16:09:15 -- accel/accel.sh@21 -- # val=0x1 00:10:17.399 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.399 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 
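Each run assembles its accel configuration as a small JSON list of scan-module calls (dsa_scan_accel_module, iaa_scan_accel_module) and hands it to accel_perf through a file-descriptor path (-c /dev/fd/62), pretty-printed with jq -r . as seen in the trace. A minimal stand-alone sketch of that pattern follows; the "subsystems"/"config" wrapper and the relative paths are assumptions, only the method names and flags are taken from the log:

    # sketch: enable the DSA and IAA accel modules, then feed the config to accel_perf
    # via a process substitution (wrapper structure assumed, method names from the log)
    cfg='{"subsystems":[{"subsystem":"accel","config":[{"method":"dsa_scan_accel_module"},{"method":"iaa_scan_accel_module"}]}]}'
    echo "$cfg" | jq -r .   # validate / pretty-print, as accel.sh does
    # run from the spdk source tree (path assumption); -c takes a JSON config file path
    ./build/examples/accel_perf -c <(echo "$cfg") -t 1 -w compress -l test/accel/bib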
00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=compress 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=iaa 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@23 -- # accel_module=iaa 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=32 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=32 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=1 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val=No 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:17.400 16:09:15 -- accel/accel.sh@21 -- # val= 00:10:17.400 16:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # IFS=: 00:10:17.400 16:09:15 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- 
# read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@21 -- # val= 00:10:19.936 16:09:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # IFS=: 00:10:19.936 16:09:18 -- accel/accel.sh@20 -- # read -r var val 00:10:19.936 16:09:18 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:10:19.936 16:09:18 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:19.936 16:09:18 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:10:19.936 00:10:19.936 real 0m19.361s 00:10:19.936 user 0m6.554s 00:10:19.936 sys 0m0.457s 00:10:19.936 16:09:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.936 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:10:19.936 ************************************ 00:10:19.936 END TEST accel_comp 00:10:19.936 ************************************ 00:10:19.936 16:09:18 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:19.936 16:09:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:19.936 16:09:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.936 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:10:19.936 ************************************ 00:10:19.936 START TEST accel_decomp 00:10:19.936 ************************************ 00:10:19.936 16:09:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:19.936 16:09:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.936 16:09:18 -- accel/accel.sh@17 -- # local accel_module 00:10:19.936 16:09:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:19.936 16:09:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:19.936 16:09:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.936 16:09:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.936 16:09:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:19.936 16:09:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:19.936 16:09:18 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:19.936 16:09:18 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:19.936 16:09:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.936 16:09:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.936 16:09:18 -- accel/accel.sh@41 
-- # local IFS=, 00:10:19.936 16:09:18 -- accel/accel.sh@42 -- # jq -r . 00:10:19.936 [2024-04-23 16:09:18.786353] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:19.936 [2024-04-23 16:09:18.786471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942849 ] 00:10:19.936 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.196 [2024-04-23 16:09:18.901825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.196 [2024-04-23 16:09:18.995874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.197 [2024-04-23 16:09:19.000499] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:20.197 [2024-04-23 16:09:19.008464] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:30.175 16:09:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:30.175 00:10:30.175 SPDK Configuration: 00:10:30.175 Core mask: 0x1 00:10:30.175 00:10:30.175 Accel Perf Configuration: 00:10:30.175 Workload Type: decompress 00:10:30.175 Transfer size: 4096 bytes 00:10:30.175 Vector count 1 00:10:30.175 Module: iaa 00:10:30.175 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:30.175 Queue depth: 32 00:10:30.175 Allocate depth: 32 00:10:30.175 # threads/core: 1 00:10:30.175 Run time: 1 seconds 00:10:30.175 Verify: Yes 00:10:30.175 00:10:30.175 Running for 1 seconds... 00:10:30.175 00:10:30.175 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.175 ------------------------------------------------------------------------------------ 00:10:30.175 0,0 289056/s 655 MiB/s 0 0 00:10:30.175 ==================================================================================== 00:10:30.175 Total 289056/s 1129 MiB/s 0 0' 00:10:30.175 16:09:28 -- accel/accel.sh@20 -- # IFS=: 00:10:30.175 16:09:28 -- accel/accel.sh@20 -- # read -r var val 00:10:30.175 16:09:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:30.175 16:09:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:30.175 16:09:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.175 16:09:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.175 16:09:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:30.175 16:09:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:30.175 16:09:28 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:30.175 16:09:28 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:30.175 16:09:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.175 16:09:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.175 16:09:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.175 16:09:28 -- accel/accel.sh@42 -- # jq -r . 00:10:30.175 [2024-04-23 16:09:28.450410] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
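The decompress runs add -y, which the configuration block above reports as Verify: Yes. Run by hand, the command the harness wraps looks roughly like this (same assumed paths and $cfg as in the earlier sketch):

    # decompress the pre-compressed bib test file for 1 second and verify the output (-y)
    ./build/examples/accel_perf -c <(echo "$cfg") -t 1 -w decompress -l test/accel/bib -y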
00:10:30.175 [2024-04-23 16:09:28.450491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944649 ] 00:10:30.176 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.176 [2024-04-23 16:09:28.534812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.176 [2024-04-23 16:09:28.629360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.176 [2024-04-23 16:09:28.633935] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:30.176 [2024-04-23 16:09:28.641913] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=0x1 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=decompress 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=iaa 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@23 -- # accel_module=iaa 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- 
accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=32 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=32 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=1 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val=Yes 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:36.746 16:09:35 -- accel/accel.sh@21 -- # val= 00:10:36.746 16:09:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # IFS=: 00:10:36.746 16:09:35 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@21 -- # val= 00:10:39.281 16:09:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # IFS=: 00:10:39.281 16:09:38 -- accel/accel.sh@20 -- # read -r var val 00:10:39.281 16:09:38 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:10:39.281 16:09:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:39.281 16:09:38 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:10:39.281 00:10:39.281 real 0m19.326s 00:10:39.281 user 0m6.523s 00:10:39.281 sys 0m0.430s 00:10:39.281 16:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.281 16:09:38 -- common/autotest_common.sh@10 -- # set +x 00:10:39.281 
************************************ 00:10:39.281 END TEST accel_decomp 00:10:39.281 ************************************ 00:10:39.281 16:09:38 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:39.281 16:09:38 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:39.281 16:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.281 16:09:38 -- common/autotest_common.sh@10 -- # set +x 00:10:39.281 ************************************ 00:10:39.281 START TEST accel_decmop_full 00:10:39.281 ************************************ 00:10:39.281 16:09:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:39.281 16:09:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.281 16:09:38 -- accel/accel.sh@17 -- # local accel_module 00:10:39.281 16:09:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:39.281 16:09:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:39.281 16:09:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.281 16:09:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.281 16:09:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:39.281 16:09:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:39.281 16:09:38 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:39.281 16:09:38 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:39.281 16:09:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.281 16:09:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.281 16:09:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.281 16:09:38 -- accel/accel.sh@42 -- # jq -r . 00:10:39.281 [2024-04-23 16:09:38.140379] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:39.281 [2024-04-23 16:09:38.140494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946677 ] 00:10:39.281 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.541 [2024-04-23 16:09:38.250305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.541 [2024-04-23 16:09:38.338481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.541 [2024-04-23 16:09:38.342976] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:39.541 [2024-04-23 16:09:38.350948] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:49.529 16:09:47 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:49.529 00:10:49.529 SPDK Configuration: 00:10:49.529 Core mask: 0x1 00:10:49.529 00:10:49.529 Accel Perf Configuration: 00:10:49.529 Workload Type: decompress 00:10:49.529 Transfer size: 111250 bytes 00:10:49.529 Vector count 1 00:10:49.529 Module: iaa 00:10:49.529 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:49.529 Queue depth: 32 00:10:49.529 Allocate depth: 32 00:10:49.529 # threads/core: 1 00:10:49.529 Run time: 1 seconds 00:10:49.529 Verify: Yes 00:10:49.529 00:10:49.529 Running for 1 seconds... 00:10:49.529 00:10:49.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.529 ------------------------------------------------------------------------------------ 00:10:49.529 0,0 107856/s 6080 MiB/s 0 0 00:10:49.529 ==================================================================================== 00:10:49.529 Total 107856/s 11443 MiB/s 0 0' 00:10:49.529 16:09:47 -- accel/accel.sh@20 -- # IFS=: 00:10:49.529 16:09:47 -- accel/accel.sh@20 -- # read -r var val 00:10:49.529 16:09:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:49.529 16:09:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:49.529 16:09:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.529 16:09:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.529 16:09:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:49.529 16:09:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:49.529 16:09:47 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:49.529 16:09:47 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:49.529 16:09:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.529 16:09:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.529 16:09:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.529 16:09:47 -- accel/accel.sh@42 -- # jq -r . 00:10:49.529 [2024-04-23 16:09:47.788084] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
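In the accel_decmop_full variant the transfer size grows to 111250 bytes, which is what lifts the MiB/s figures above; the Total row is still transfers/s times transfer size:

    # 107856 transfers/s x 111250 B per transfer, in MiB/s
    echo $(( 107856 * 111250 / 1024 / 1024 ))   # prints 11443, matching the Total row above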
00:10:49.529 [2024-04-23 16:09:47.788169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948544 ] 00:10:49.529 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.529 [2024-04-23 16:09:47.875286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.529 [2024-04-23 16:09:47.964724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.529 [2024-04-23 16:09:47.969260] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:49.529 [2024-04-23 16:09:47.977226] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=0x1 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=decompress 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=iaa 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@23 -- # accel_module=iaa 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- 
accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=32 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=32 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=1 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val=Yes 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:56.107 16:09:54 -- accel/accel.sh@21 -- # val= 00:10:56.107 16:09:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # IFS=: 00:10:56.107 16:09:54 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@21 -- # val= 00:10:58.651 16:09:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # IFS=: 00:10:58.651 16:09:57 -- accel/accel.sh@20 -- # read -r var val 00:10:58.651 16:09:57 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:10:58.651 16:09:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:58.651 16:09:57 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:10:58.651 00:10:58.651 real 0m19.313s 00:10:58.651 user 0m6.541s 00:10:58.651 sys 0m0.418s 00:10:58.651 16:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.651 16:09:57 -- common/autotest_common.sh@10 -- # set +x 00:10:58.651 
************************************ 00:10:58.651 END TEST accel_decmop_full 00:10:58.651 ************************************ 00:10:58.652 16:09:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.652 16:09:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:58.652 16:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.652 16:09:57 -- common/autotest_common.sh@10 -- # set +x 00:10:58.652 ************************************ 00:10:58.652 START TEST accel_decomp_mcore 00:10:58.652 ************************************ 00:10:58.652 16:09:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.652 16:09:57 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.652 16:09:57 -- accel/accel.sh@17 -- # local accel_module 00:10:58.652 16:09:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.652 16:09:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:58.652 16:09:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.652 16:09:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.652 16:09:57 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:58.652 16:09:57 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:58.652 16:09:57 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:10:58.652 16:09:57 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:58.652 16:09:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.652 16:09:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.652 16:09:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.652 16:09:57 -- accel/accel.sh@42 -- # jq -r . 00:10:58.652 [2024-04-23 16:09:57.498382] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:10:58.652 [2024-04-23 16:09:57.498527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950514 ] 00:10:58.912 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.912 [2024-04-23 16:09:57.633421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.912 [2024-04-23 16:09:57.727901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.912 [2024-04-23 16:09:57.728005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.912 [2024-04-23 16:09:57.728102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.912 [2024-04-23 16:09:57.728114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.912 [2024-04-23 16:09:57.732739] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:58.912 [2024-04-23 16:09:57.740686] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:08.913 16:10:07 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:08.913 00:11:08.913 SPDK Configuration: 00:11:08.913 Core mask: 0xf 00:11:08.913 00:11:08.913 Accel Perf Configuration: 00:11:08.913 Workload Type: decompress 00:11:08.913 Transfer size: 4096 bytes 00:11:08.913 Vector count 1 00:11:08.913 Module: iaa 00:11:08.913 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:08.913 Queue depth: 32 00:11:08.913 Allocate depth: 32 00:11:08.913 # threads/core: 1 00:11:08.913 Run time: 1 seconds 00:11:08.913 Verify: Yes 00:11:08.913 00:11:08.913 Running for 1 seconds... 00:11:08.913 00:11:08.913 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.913 ------------------------------------------------------------------------------------ 00:11:08.913 0,0 110912/s 251 MiB/s 0 0 00:11:08.913 3,0 113440/s 257 MiB/s 0 0 00:11:08.913 2,0 112192/s 254 MiB/s 0 0 00:11:08.913 1,0 111856/s 253 MiB/s 0 0 00:11:08.913 ==================================================================================== 00:11:08.913 Total 448400/s 1751 MiB/s 0 0' 00:11:08.913 16:10:07 -- accel/accel.sh@20 -- # IFS=: 00:11:08.913 16:10:07 -- accel/accel.sh@20 -- # read -r var val 00:11:08.913 16:10:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:08.913 16:10:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:11:08.913 16:10:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.913 16:10:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.913 16:10:07 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:08.913 16:10:07 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:08.913 16:10:07 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:08.913 16:10:07 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:08.913 16:10:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.913 16:10:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.913 16:10:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.913 16:10:07 -- accel/accel.sh@42 -- # jq -r . 00:11:08.913 [2024-04-23 16:10:07.233289] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
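The mcore variant passes -m 0xf, so four reactors (cores 0-3) each run the workload and the table above gains one row per core. A direct equivalent of the wrapped command, with paths and $cfg assumed as in the earlier sketches:

    # decompress on 4 cores (mask 0xf) against the bib test file, verifying output
    ./build/examples/accel_perf -c <(echo "$cfg") -t 1 -w decompress -l test/accel/bib -y -m 0xf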
00:11:08.913 [2024-04-23 16:10:07.233412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952435 ] 00:11:08.913 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.913 [2024-04-23 16:10:07.350488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.913 [2024-04-23 16:10:07.446602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.913 [2024-04-23 16:10:07.446724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.913 [2024-04-23 16:10:07.446762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.913 [2024-04-23 16:10:07.446774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.913 [2024-04-23 16:10:07.451363] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:08.913 [2024-04-23 16:10:07.459324] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=0xf 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=decompress 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=iaa 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 
00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=32 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=32 00:11:15.489 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.489 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.489 16:10:13 -- accel/accel.sh@21 -- # val=1 00:11:15.490 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.490 16:10:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:15.490 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.490 16:10:13 -- accel/accel.sh@21 -- # val=Yes 00:11:15.490 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.490 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.490 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:15.490 16:10:13 -- accel/accel.sh@21 -- # val= 00:11:15.490 16:10:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # IFS=: 00:11:15.490 16:10:13 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 
16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@21 -- # val= 00:11:18.025 16:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # IFS=: 00:11:18.025 16:10:16 -- accel/accel.sh@20 -- # read -r var val 00:11:18.025 16:10:16 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:18.025 16:10:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:18.025 16:10:16 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:18.025 00:11:18.025 real 0m19.450s 00:11:18.025 user 1m2.153s 00:11:18.025 sys 0m0.532s 00:11:18.025 16:10:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.025 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.025 ************************************ 00:11:18.025 END TEST accel_decomp_mcore 00:11:18.025 ************************************ 00:11:18.025 16:10:16 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.025 16:10:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:18.025 16:10:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:18.025 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:11:18.025 ************************************ 00:11:18.025 START TEST accel_decomp_full_mcore 00:11:18.025 ************************************ 00:11:18.025 16:10:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.025 16:10:16 -- accel/accel.sh@16 -- # local accel_opc 00:11:18.025 16:10:16 -- accel/accel.sh@17 -- # local accel_module 00:11:18.025 16:10:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.025 16:10:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:18.025 16:10:16 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.025 16:10:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.025 16:10:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:18.025 16:10:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:18.025 16:10:16 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:18.025 16:10:16 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:18.025 16:10:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.025 16:10:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.025 16:10:16 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.025 16:10:16 -- accel/accel.sh@42 -- # jq -r . 00:11:18.286 [2024-04-23 16:10:16.971153] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:11:18.286 [2024-04-23 16:10:16.971269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954259 ] 00:11:18.286 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.286 [2024-04-23 16:10:17.085611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.286 [2024-04-23 16:10:17.182648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.286 [2024-04-23 16:10:17.182749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.286 [2024-04-23 16:10:17.182778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.287 [2024-04-23 16:10:17.182790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.287 [2024-04-23 16:10:17.187402] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:18.287 [2024-04-23 16:10:17.195363] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:28.321 16:10:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:28.321 00:11:28.321 SPDK Configuration: 00:11:28.321 Core mask: 0xf 00:11:28.321 00:11:28.321 Accel Perf Configuration: 00:11:28.321 Workload Type: decompress 00:11:28.321 Transfer size: 111250 bytes 00:11:28.321 Vector count 1 00:11:28.321 Module: iaa 00:11:28.321 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:28.321 Queue depth: 32 00:11:28.321 Allocate depth: 32 00:11:28.321 # threads/core: 1 00:11:28.322 Run time: 1 seconds 00:11:28.322 Verify: Yes 00:11:28.322 00:11:28.322 Running for 1 seconds... 00:11:28.322 00:11:28.322 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:28.322 ------------------------------------------------------------------------------------ 00:11:28.322 0,0 82896/s 4673 MiB/s 0 0 00:11:28.322 3,0 85202/s 4803 MiB/s 0 0 00:11:28.322 2,0 85265/s 4806 MiB/s 0 0 00:11:28.322 1,0 84208/s 4747 MiB/s 0 0 00:11:28.322 ==================================================================================== 00:11:28.322 Total 337571/s 35815 MiB/s 0 0' 00:11:28.322 16:10:26 -- accel/accel.sh@20 -- # IFS=: 00:11:28.322 16:10:26 -- accel/accel.sh@20 -- # read -r var val 00:11:28.322 16:10:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:28.322 16:10:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:28.322 16:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.322 16:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.322 16:10:26 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:28.322 16:10:26 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:28.322 16:10:26 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:28.322 16:10:26 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:28.322 16:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.322 16:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.322 16:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.322 16:10:26 -- accel/accel.sh@42 -- # jq -r . 00:11:28.322 [2024-04-23 16:10:26.695416] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
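Note: the Total row of each results table is consistent with transfers per second multiplied by the transfer size; a quick shell check for the 111250-byte run above and the earlier 4096-byte run (only the Total row is checked here):

echo $(( 337571 * 111250 / 1024 / 1024 ))   # prints 35815, matching "Total 337571/s 35815 MiB/s"
echo $(( 448400 * 4096 / 1024 / 1024 ))     # prints 1751, matching the earlier 4096-byte table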
00:11:28.322 [2024-04-23 16:10:26.695540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956098 ] 00:11:28.322 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.322 [2024-04-23 16:10:26.810652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.322 [2024-04-23 16:10:26.908810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.322 [2024-04-23 16:10:26.908914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.322 [2024-04-23 16:10:26.909016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.322 [2024-04-23 16:10:26.909025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.322 [2024-04-23 16:10:26.913567] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:28.322 [2024-04-23 16:10:26.921533] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val=0xf 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val=decompress 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.971 16:10:33 -- accel/accel.sh@21 -- # val=iaa 00:11:34.971 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.971 16:10:33 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # IFS=: 
00:11:34.971 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val=32 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val=32 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val=1 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val=Yes 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:34.972 16:10:33 -- accel/accel.sh@21 -- # val= 00:11:34.972 16:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # IFS=: 00:11:34.972 16:10:33 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 
16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@21 -- # val= 00:11:37.517 16:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # IFS=: 00:11:37.517 16:10:36 -- accel/accel.sh@20 -- # read -r var val 00:11:37.517 16:10:36 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:37.517 16:10:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:37.517 16:10:36 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:37.517 00:11:37.517 real 0m19.438s 00:11:37.517 user 1m2.250s 00:11:37.517 sys 0m0.500s 00:11:37.517 16:10:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.517 16:10:36 -- common/autotest_common.sh@10 -- # set +x 00:11:37.517 ************************************ 00:11:37.517 END TEST accel_decomp_full_mcore 00:11:37.517 ************************************ 00:11:37.517 16:10:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:37.517 16:10:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:37.517 16:10:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.517 16:10:36 -- common/autotest_common.sh@10 -- # set +x 00:11:37.517 ************************************ 00:11:37.517 START TEST accel_decomp_mthread 00:11:37.517 ************************************ 00:11:37.517 16:10:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:37.517 16:10:36 -- accel/accel.sh@16 -- # local accel_opc 00:11:37.517 16:10:36 -- accel/accel.sh@17 -- # local accel_module 00:11:37.517 16:10:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:37.517 16:10:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:37.517 16:10:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:37.517 16:10:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:37.517 16:10:36 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:37.517 16:10:36 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:37.517 16:10:36 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:37.517 16:10:36 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:37.517 16:10:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:37.517 16:10:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:37.517 16:10:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:37.517 16:10:36 -- accel/accel.sh@42 -- # jq -r . 00:11:37.517 [2024-04-23 16:10:36.444624] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
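Note: each sub-test above is driven by run_test from test/common/autotest_common.sh (the file the xtrace points at), which prints the START/END TEST banners and times the test body, producing the real/user/sys lines. A rough, illustrative sketch of that wrapping, not the verbatim helper:

# Rough sketch of a run_test-style wrapper; the real helper also manages xtrace
# and error reporting.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"        # source of the real/user/sys lines seen in this log
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
# usage mirroring the traced invocation of the next sub-test:
run_test_sketch accel_decomp_mthread accel_test -t 1 -w decompress -l test/accel/bib -y -T 2

The user time exceeding wall time in the 0xf runs (about 1m2s of CPU for a roughly 19.4s run) is consistent with each of the four SPDK reactors busy-polling its core for the whole test.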
00:11:37.517 [2024-04-23 16:10:36.444745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958166 ] 00:11:37.778 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.778 [2024-04-23 16:10:36.545549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.778 [2024-04-23 16:10:36.639296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.778 [2024-04-23 16:10:36.643850] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:37.778 [2024-04-23 16:10:36.651816] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:47.782 16:10:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:47.782 00:11:47.782 SPDK Configuration: 00:11:47.782 Core mask: 0x1 00:11:47.782 00:11:47.782 Accel Perf Configuration: 00:11:47.782 Workload Type: decompress 00:11:47.782 Transfer size: 4096 bytes 00:11:47.782 Vector count 1 00:11:47.782 Module: iaa 00:11:47.782 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:47.782 Queue depth: 32 00:11:47.782 Allocate depth: 32 00:11:47.782 # threads/core: 2 00:11:47.782 Run time: 1 seconds 00:11:47.782 Verify: Yes 00:11:47.782 00:11:47.782 Running for 1 seconds... 00:11:47.782 00:11:47.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:47.782 ------------------------------------------------------------------------------------ 00:11:47.782 0,1 147088/s 333 MiB/s 0 0 00:11:47.782 0,0 145488/s 330 MiB/s 0 0 00:11:47.782 ==================================================================================== 00:11:47.782 Total 292576/s 1142 MiB/s 0 0' 00:11:47.782 16:10:46 -- accel/accel.sh@20 -- # IFS=: 00:11:47.782 16:10:46 -- accel/accel.sh@20 -- # read -r var val 00:11:47.782 16:10:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:47.782 16:10:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:11:47.782 16:10:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:47.782 16:10:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:47.782 16:10:46 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:47.782 16:10:46 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:47.782 16:10:46 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:47.782 16:10:46 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:47.782 16:10:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:47.782 16:10:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:47.782 16:10:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:47.782 16:10:46 -- accel/accel.sh@42 -- # jq -r . 00:11:47.782 [2024-04-23 16:10:46.126485] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
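Note: with -T 2 the configuration block reports "# threads/core: 2" and the table gains one row per worker thread, keyed by the Core,Thread column (0,1 and 0,0 above are the two threads on core 0); the Total row is simply their sum:

echo $(( 147088 + 145488 ))   # prints 292576 transfers/s, the Total row above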
00:11:47.782 [2024-04-23 16:10:46.126609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959961 ] 00:11:47.782 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.782 [2024-04-23 16:10:46.241747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.782 [2024-04-23 16:10:46.340962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.782 [2024-04-23 16:10:46.345536] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:47.782 [2024-04-23 16:10:46.353503] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=0x1 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=decompress 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=iaa 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@23 -- # accel_module=iaa 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- 
accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=32 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=32 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=2 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val=Yes 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:54.401 16:10:52 -- accel/accel.sh@21 -- # val= 00:11:54.401 16:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # IFS=: 00:11:54.401 16:10:52 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@21 -- # val= 00:11:56.950 16:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # IFS=: 00:11:56.950 16:10:55 -- accel/accel.sh@20 -- # read -r var val 00:11:56.950 16:10:55 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:11:56.950 16:10:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:56.950 16:10:55 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:11:56.950 
00:11:56.950 real 0m19.402s 00:11:56.950 user 0m6.561s 00:11:56.950 sys 0m0.466s 00:11:56.950 16:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.950 16:10:55 -- common/autotest_common.sh@10 -- # set +x 00:11:56.950 ************************************ 00:11:56.950 END TEST accel_decomp_mthread 00:11:56.950 ************************************ 00:11:56.950 16:10:55 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:56.950 16:10:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:56.950 16:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.950 16:10:55 -- common/autotest_common.sh@10 -- # set +x 00:11:56.950 ************************************ 00:11:56.950 START TEST accel_deomp_full_mthread 00:11:56.950 ************************************ 00:11:56.950 16:10:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:56.950 16:10:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:56.950 16:10:55 -- accel/accel.sh@17 -- # local accel_module 00:11:56.950 16:10:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:56.950 16:10:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:11:56.950 16:10:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:56.950 16:10:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:56.950 16:10:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:56.950 16:10:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:56.950 16:10:55 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:11:56.950 16:10:55 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:56.950 16:10:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:56.950 16:10:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:56.950 16:10:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:56.950 16:10:55 -- accel/accel.sh@42 -- # jq -r . 00:11:56.950 [2024-04-23 16:10:55.877927] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:11:56.950 [2024-04-23 16:10:55.878053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961843 ] 00:11:57.212 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.212 [2024-04-23 16:10:55.995727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.212 [2024-04-23 16:10:56.094764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.212 [2024-04-23 16:10:56.099360] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:57.212 [2024-04-23 16:10:56.107327] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:07.209 16:11:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:12:07.209 00:12:07.209 SPDK Configuration: 00:12:07.209 Core mask: 0x1 00:12:07.209 00:12:07.209 Accel Perf Configuration: 00:12:07.209 Workload Type: decompress 00:12:07.209 Transfer size: 111250 bytes 00:12:07.209 Vector count 1 00:12:07.209 Module: iaa 00:12:07.209 File Name: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:07.209 Queue depth: 32 00:12:07.209 Allocate depth: 32 00:12:07.209 # threads/core: 2 00:12:07.209 Run time: 1 seconds 00:12:07.209 Verify: Yes 00:12:07.209 00:12:07.209 Running for 1 seconds... 00:12:07.209 00:12:07.209 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:07.209 ------------------------------------------------------------------------------------ 00:12:07.209 0,1 62384/s 3516 MiB/s 0 0 00:12:07.209 0,0 61936/s 3491 MiB/s 0 0 00:12:07.209 ==================================================================================== 00:12:07.209 Total 124320/s 13189 MiB/s 0 0' 00:12:07.209 16:11:05 -- accel/accel.sh@20 -- # IFS=: 00:12:07.209 16:11:05 -- accel/accel.sh@20 -- # read -r var val 00:12:07.209 16:11:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.209 16:11:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:12:07.209 16:11:05 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.209 16:11:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:07.209 16:11:05 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:07.209 16:11:05 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:07.209 16:11:05 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:07.209 16:11:05 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:07.209 16:11:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:07.209 16:11:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:07.209 16:11:05 -- accel/accel.sh@41 -- # local IFS=, 00:12:07.209 16:11:05 -- accel/accel.sh@42 -- # jq -r . 00:12:07.209 [2024-04-23 16:11:05.597880] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:12:07.209 [2024-04-23 16:11:05.598003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963856 ] 00:12:07.209 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.209 [2024-04-23 16:11:05.713019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.209 [2024-04-23 16:11:05.808045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.209 [2024-04-23 16:11:05.812558] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:07.209 [2024-04-23 16:11:05.820528] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:13.796 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.796 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.796 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.796 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.796 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.796 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.796 16:11:12 -- accel/accel.sh@21 -- # val=0x1 00:12:13.796 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.796 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=decompress 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=iaa 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@23 -- # accel_module=iaa 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- 
accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=32 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=32 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=2 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val=Yes 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:13.797 16:11:12 -- accel/accel.sh@21 -- # val= 00:12:13.797 16:11:12 -- accel/accel.sh@22 -- # case "$var" in 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # IFS=: 00:12:13.797 16:11:12 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@21 -- # val= 00:12:16.336 16:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # IFS=: 00:12:16.336 16:11:15 -- accel/accel.sh@20 -- # read -r var val 00:12:16.336 16:11:15 -- accel/accel.sh@28 -- # [[ -n iaa ]] 00:12:16.336 16:11:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:16.336 16:11:15 -- accel/accel.sh@28 -- # [[ iaa == \i\a\a ]] 00:12:16.336 
00:12:16.336 real 0m19.423s 00:12:16.336 user 0m6.550s 00:12:16.336 sys 0m0.507s 00:12:16.336 16:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.336 16:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:16.336 ************************************ 00:12:16.336 END TEST accel_deomp_full_mthread 00:12:16.336 ************************************ 00:12:16.597 16:11:15 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:16.597 16:11:15 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:16.597 16:11:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:16.597 16:11:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.597 16:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:16.597 16:11:15 -- accel/accel.sh@129 -- # build_accel_config 00:12:16.597 16:11:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:16.597 16:11:15 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:12:16.597 16:11:15 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:12:16.597 16:11:15 -- accel/accel.sh@34 -- # [[ 1 -gt 0 ]] 00:12:16.597 16:11:15 -- accel/accel.sh@34 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:12:16.597 16:11:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:16.597 16:11:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:16.598 16:11:15 -- accel/accel.sh@41 -- # local IFS=, 00:12:16.598 16:11:15 -- accel/accel.sh@42 -- # jq -r . 00:12:16.598 ************************************ 00:12:16.598 START TEST accel_dif_functional_tests 00:12:16.598 ************************************ 00:12:16.598 16:11:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:16.598 [2024-04-23 16:11:15.361989] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
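Note: the CUnit output that follows comes from the standalone DIF functional test binary (test/accel/dif/dif), launched with the same generated accel config. The many *ERROR* lines from dif.c, accel_dsa.c and idxd_user.c below appear to be expected negative-path output: the "DIF not generated" and "incorrect tag" cases feed mismatching Guard/App/Ref tags and assert that verification fails, and the run summary further down still reports all 20 tests passing. To replay it by hand (build_config is the illustrative helper sketched after the first run above):

# Replay the DIF functional tests against the same DSA/IAA accel config.
./test/accel/dif/dif -c <(build_config)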
00:12:16.598 [2024-04-23 16:11:15.362098] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2965671 ] 00:12:16.598 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.598 [2024-04-23 16:11:15.474383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.859 [2024-04-23 16:11:15.572367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.859 [2024-04-23 16:11:15.572458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.859 [2024-04-23 16:11:15.572464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.859 [2024-04-23 16:11:15.577025] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:16.859 [2024-04-23 16:11:15.584995] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:24.994 00:12:24.994 00:12:24.994 CUnit - A unit testing framework for C - Version 2.1-3 00:12:24.994 http://cunit.sourceforge.net/ 00:12:24.994 00:12:24.994 00:12:24.994 Suite: accel_dif 00:12:24.994 Test: verify: DIF generated, GUARD check ...passed 00:12:24.994 Test: verify: DIF generated, APPTAG check ...passed 00:12:24.994 Test: verify: DIF generated, REFTAG check ...passed 00:12:24.994 Test: verify: DIF not generated, GUARD check ...[2024-04-23 16:11:22.502718] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.994 [2024-04-23 16:11:22.502767] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.502779] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502789] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502796] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502803] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502809] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.994 [2024-04-23 16:11:22.502818] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.994 [2024-04-23 16:11:22.502826] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.994 [2024-04-23 16:11:22.502847] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:24.994 [2024-04-23 16:11:22.502856] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:12:24.994 [2024-04-23 16:11:22.502876] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:24.994 passed 00:12:24.994 Test: verify: DIF not generated, APPTAG check ...[2024-04-23 16:11:22.502935] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.994 [2024-04-23 16:11:22.502945] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.502956] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502963] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502971] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502979] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.502986] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.994 [2024-04-23 16:11:22.502992] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.994 [2024-04-23 16:11:22.503000] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.994 [2024-04-23 16:11:22.503008] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:24.994 [2024-04-23 16:11:22.503017] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:24.994 [2024-04-23 16:11:22.503036] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:24.994 passed 00:12:24.994 Test: verify: DIF not generated, REFTAG check ...[2024-04-23 16:11:22.503071] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.994 [2024-04-23 16:11:22.503084] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.503091] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503099] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503105] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503113] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503120] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.994 [2024-04-23 16:11:22.503133] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.994 [2024-04-23 16:11:22.503140] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.994 [2024-04-23 16:11:22.503166] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:24.994 [2024-04-23 16:11:22.503173] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:12:24.994 [2024-04-23 16:11:22.503190] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:24.994 passed 00:12:24.994 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:24.994 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-23 16:11:22.503275] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.994 [2024-04-23 16:11:22.503285] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.503293] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503300] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503308] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503316] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503324] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.994 [2024-04-23 16:11:22.503331] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.994 [2024-04-23 16:11:22.503341] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.994 [2024-04-23 16:11:22.503350] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:24.994 [2024-04-23 16:11:22.503359] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:12:24.994 passed 00:12:24.994 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:24.994 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:24.994 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:24.994 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-23 16:11:22.503525] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.994 [2024-04-23 16:11:22.503537] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.503544] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.994 [2024-04-23 16:11:22.503552] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503558] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503566] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503576] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.995 [2024-04-23 16:11:22.503584] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.995 [2024-04-23 16:11:22.503590] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.995 [2024-04-23 16:11:22.503598] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:12:24.995 [2024-04-23 16:11:22.503604] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 16:11:22.503612] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503618] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503625] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503638] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503645] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.995 [2024-04-23 16:11:22.503651] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.995 [2024-04-23 16:11:22.503661] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.995 [2024-04-23 16:11:22.503668] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:24.995 [2024-04-23 16:11:22.503678] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:12:24.995 [2024-04-23 16:11:22.503687] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:12:24.995 [2024-04-23 16:11:22.503695] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:passed 00:12:24.995 Test: generate copy: DIF generated, GUARD check ...[2024-04-23 16:11:22.503703] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503713] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503722] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503730] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:12:24.995 [2024-04-23 16:11:22.503737] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:12:24.995 [2024-04-23 16:11:22.503744] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:12:24.995 [2024-04-23 16:11:22.503751] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:12:24.995 passed 00:12:24.995 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:24.995 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:24.995 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-23 16:11:22.503889] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:12:24.995 passed 00:12:24.995 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-23 16:11:22.503928] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:12:24.995 passed 00:12:24.995 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-23 16:11:22.503965] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:12:24.995 passed 00:12:24.995 Test: generate copy: iovecs-len validate ...[2024-04-23 16:11:22.504004] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:12:24.995 passed 00:12:24.995 Test: generate copy: buffer alignment validate ...passed 00:12:24.995 00:12:24.995 Run Summary: Type Total Ran Passed Failed Inactive 00:12:24.995 suites 1 1 n/a 0 0 00:12:24.995 tests 20 20 20 0 0 00:12:24.995 asserts 204 204 204 0 n/a 00:12:24.995 00:12:24.995 Elapsed time = 0.005 seconds 00:12:25.932 00:12:25.932 real 0m9.527s 00:12:25.932 user 0m20.124s 00:12:25.932 sys 0m0.264s 00:12:25.932 16:11:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.933 16:11:24 -- common/autotest_common.sh@10 -- # set +x 00:12:25.933 ************************************ 00:12:25.933 END TEST accel_dif_functional_tests 00:12:25.933 ************************************ 00:12:25.933 00:12:25.933 real 7m5.875s 00:12:25.933 user 4m32.263s 00:12:25.933 sys 0m11.492s 00:12:25.933 16:11:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.933 16:11:24 -- common/autotest_common.sh@10 -- # set +x 00:12:25.933 ************************************ 00:12:25.933 END TEST accel 00:12:25.933 ************************************ 00:12:26.194 16:11:24 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:26.194 16:11:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:26.194 16:11:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.194 16:11:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.194 ************************************ 00:12:26.194 START TEST accel_rpc 00:12:26.194 ************************************ 00:12:26.194 16:11:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:12:26.194 * Looking for test storage... 00:12:26.194 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:12:26.194 16:11:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:26.194 16:11:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2967532 00:12:26.194 16:11:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 2967532 00:12:26.194 16:11:24 -- common/autotest_common.sh@819 -- # '[' -z 2967532 ']' 00:12:26.194 16:11:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.194 16:11:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.194 16:11:24 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:26.194 16:11:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.194 16:11:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.194 16:11:24 -- common/autotest_common.sh@10 -- # set +x 00:12:26.194 [2024-04-23 16:11:25.066453] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
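Note: the accel_rpc suite that starts here runs against a bare spdk_tgt launched with --wait-for-rpc, so the accel modules can be enabled over JSON-RPC before subsystem initialization completes. A minimal by-hand equivalent, assuming the stock scripts/rpc.py client exposes these methods (the method names themselves appear verbatim in the trace); the test script itself uses the waitforlisten and rpc_cmd helpers instead:

# Minimal, illustrative equivalent of the accel_rpc setup.
./build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
# wait until the RPC socket answers (waitforlisten does this more carefully)
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
./scripts/rpc.py dsa_scan_accel_module   # enable the DSA user-mode module
./scripts/rpc.py iaa_scan_accel_module   # enable the IAA user-mode module
./scripts/rpc.py framework_start_init    # let initialization continue
# ... exercise the RPCs under test, then tear down:
kill $tgt_pid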
00:12:26.194 [2024-04-23 16:11:25.066611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967532 ] 00:12:26.455 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.455 [2024-04-23 16:11:25.202321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.455 [2024-04-23 16:11:25.293329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.455 [2024-04-23 16:11:25.293538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.025 16:11:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.025 16:11:25 -- common/autotest_common.sh@852 -- # return 0 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:12:27.025 16:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:27.025 16:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 ************************************ 00:12:27.025 START TEST accel_scan_dsa_modules 00:12:27.025 ************************************ 00:12:27.025 16:11:25 -- common/autotest_common.sh@1104 -- # accel_scan_dsa_modules_test_suite 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 [2024-04-23 16:11:25.734023] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@640 -- # local es=0 00:12:27.025 16:11:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.025 16:11:25 -- common/autotest_common.sh@643 -- # rpc_cmd dsa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 request: 00:12:27.025 { 00:12:27.025 "method": "dsa_scan_accel_module", 00:12:27.025 "req_id": 1 00:12:27.025 } 00:12:27.025 Got JSON-RPC error response 00:12:27.025 response: 00:12:27.025 { 00:12:27.025 "code": -114, 00:12:27.025 "message": "Operation already in progress" 00:12:27.025 } 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:27.025 16:11:25 -- common/autotest_common.sh@643 -- # es=1 00:12:27.025 16:11:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:27.025 16:11:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:27.025 16:11:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:27.025 00:12:27.025 real 0m0.017s 00:12:27.025 user 0m0.003s 00:12:27.025 sys 0m0.001s 00:12:27.025 
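
(Editor's aside) The accel_scan_dsa_modules case above calls the dsa_scan_accel_module RPC twice: the first call enables the DSA user-mode accel module, and the repeated call is expected to fail with JSON-RPC code -114 "Operation already in progress". A minimal illustrative sketch of the same check, not part of the original test script, assuming a local spdk_tgt started with --wait-for-rpc and the in-tree rpc.py talking to the default /var/tmp/spdk.sock socket:

    rpc=./scripts/rpc.py                      # assumed path inside an SPDK checkout
    $rpc dsa_scan_accel_module                # first call: enables the DSA user-mode module
    if $rpc dsa_scan_accel_module 2>err.txt; then
        echo "unexpected: second scan succeeded" >&2; exit 1
    fi
    grep -q 'Operation already in progress' err.txt   # duplicate call must return -114
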
16:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 ************************************ 00:12:27.025 END TEST accel_scan_dsa_modules 00:12:27.025 ************************************ 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:12:27.025 16:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:27.025 16:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 ************************************ 00:12:27.025 START TEST accel_scan_iaa_modules 00:12:27.025 ************************************ 00:12:27.025 16:11:25 -- common/autotest_common.sh@1104 -- # accel_scan_iaa_modules_test_suite 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 [2024-04-23 16:11:25.778024] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@640 -- # local es=0 00:12:27.025 16:11:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:12:27.025 16:11:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:27.025 16:11:25 -- common/autotest_common.sh@643 -- # rpc_cmd iaa_scan_accel_module 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 request: 00:12:27.025 { 00:12:27.025 "method": "iaa_scan_accel_module", 00:12:27.025 "req_id": 1 00:12:27.025 } 00:12:27.025 Got JSON-RPC error response 00:12:27.025 response: 00:12:27.025 { 00:12:27.025 "code": -114, 00:12:27.025 "message": "Operation already in progress" 00:12:27.025 } 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:12:27.025 16:11:25 -- common/autotest_common.sh@643 -- # es=1 00:12:27.025 16:11:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:27.025 16:11:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:27.025 16:11:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:27.025 00:12:27.025 real 0m0.014s 00:12:27.025 user 0m0.001s 00:12:27.025 sys 0m0.002s 00:12:27.025 16:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 ************************************ 00:12:27.025 END TEST accel_scan_iaa_modules 00:12:27.025 ************************************ 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:27.025 16:11:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:27.025 16:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:27.025 16:11:25 
-- common/autotest_common.sh@10 -- # set +x 00:12:27.025 ************************************ 00:12:27.025 START TEST accel_assign_opcode 00:12:27.025 ************************************ 00:12:27.025 16:11:25 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 [2024-04-23 16:11:25.826063] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 [2024-04-23 16:11:25.834036] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:27.025 16:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.025 16:11:25 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:27.025 16:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.025 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:35.176 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.176 16:11:33 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:35.176 16:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.176 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.176 16:11:33 -- accel/accel_rpc.sh@42 -- # grep software 00:12:35.176 16:11:33 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:35.176 16:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.176 software 00:12:35.176 00:12:35.176 real 0m7.222s 00:12:35.176 user 0m0.030s 00:12:35.176 sys 0m0.007s 00:12:35.176 16:11:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.176 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.176 ************************************ 00:12:35.176 END TEST accel_assign_opcode 00:12:35.176 ************************************ 00:12:35.176 16:11:33 -- accel/accel_rpc.sh@55 -- # killprocess 2967532 00:12:35.176 16:11:33 -- common/autotest_common.sh@926 -- # '[' -z 2967532 ']' 00:12:35.176 16:11:33 -- common/autotest_common.sh@930 -- # kill -0 2967532 00:12:35.176 16:11:33 -- common/autotest_common.sh@931 -- # uname 00:12:35.176 16:11:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:35.176 16:11:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2967532 00:12:35.176 16:11:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:35.176 16:11:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:35.176 16:11:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2967532' 00:12:35.177 killing process with pid 2967532 00:12:35.177 16:11:33 -- common/autotest_common.sh@945 -- # kill 2967532 00:12:35.177 16:11:33 -- common/autotest_common.sh@950 -- # wait 2967532 00:12:37.092 00:12:37.092 real 0m11.013s 00:12:37.092 user 0m3.903s 00:12:37.092 sys 0m0.689s 00:12:37.092 16:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.092 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:37.092 ************************************ 00:12:37.092 END TEST 
accel_rpc 00:12:37.092 ************************************ 00:12:37.092 16:11:35 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:12:37.092 16:11:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:37.092 16:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.092 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:37.092 ************************************ 00:12:37.092 START TEST app_cmdline 00:12:37.092 ************************************ 00:12:37.092 16:11:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:12:37.092 * Looking for test storage... 00:12:37.351 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:12:37.351 16:11:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:37.351 16:11:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2969836 00:12:37.351 16:11:36 -- app/cmdline.sh@18 -- # waitforlisten 2969836 00:12:37.351 16:11:36 -- common/autotest_common.sh@819 -- # '[' -z 2969836 ']' 00:12:37.351 16:11:36 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:37.351 16:11:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.351 16:11:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.351 16:11:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.351 16:11:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.351 16:11:36 -- common/autotest_common.sh@10 -- # set +x 00:12:37.351 [2024-04-23 16:11:36.114435] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
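
(Editor's aside) Inside the accel_rpc suite that finishes above, the accel_assign_opcode case first assigns the copy opcode to a nonexistent module ("incorrect"), then reassigns it to the software module, starts the framework, and confirms the assignment through accel_get_opc_assignments. A rough, illustrative equivalent, again assuming spdk_tgt was launched with --wait-for-rpc so the assignment can be made before initialization:

    rpc=./scripts/rpc.py                           # assumed path inside an SPDK checkout
    $rpc accel_assign_opc -o copy -m software      # pin the copy opcode to the software module
    $rpc framework_start_init                      # finish subsystem initialization
    $rpc accel_get_opc_assignments | jq -r .copy   # should print "software"
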
00:12:37.351 [2024-04-23 16:11:36.114564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969836 ] 00:12:37.351 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.351 [2024-04-23 16:11:36.234700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.609 [2024-04-23 16:11:36.330387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.609 [2024-04-23 16:11:36.330566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.181 16:11:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:38.181 16:11:36 -- common/autotest_common.sh@852 -- # return 0 00:12:38.181 16:11:36 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:12:38.181 { 00:12:38.181 "version": "SPDK v24.01.1-pre git sha1 36faa8c312b", 00:12:38.181 "fields": { 00:12:38.181 "major": 24, 00:12:38.181 "minor": 1, 00:12:38.181 "patch": 1, 00:12:38.181 "suffix": "-pre", 00:12:38.181 "commit": "36faa8c312b" 00:12:38.181 } 00:12:38.181 } 00:12:38.181 16:11:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:38.181 16:11:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:38.181 16:11:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:38.181 16:11:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:38.181 16:11:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:38.181 16:11:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.181 16:11:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:38.181 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:12:38.181 16:11:37 -- app/cmdline.sh@26 -- # sort 00:12:38.181 16:11:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.181 16:11:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:38.181 16:11:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:38.181 16:11:37 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:38.181 16:11:37 -- common/autotest_common.sh@640 -- # local es=0 00:12:38.181 16:11:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:38.181 16:11:37 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:38.181 16:11:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:38.181 16:11:37 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:38.181 16:11:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:38.181 16:11:37 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:38.181 16:11:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:38.181 16:11:37 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:38.181 16:11:37 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:12:38.181 16:11:37 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:38.443 request: 00:12:38.443 { 00:12:38.443 "method": "env_dpdk_get_mem_stats", 00:12:38.443 "req_id": 1 00:12:38.443 } 00:12:38.443 Got JSON-RPC error response 00:12:38.443 response: 00:12:38.443 { 00:12:38.443 "code": -32601, 00:12:38.443 "message": "Method not found" 00:12:38.443 } 00:12:38.443 16:11:37 -- common/autotest_common.sh@643 -- # es=1 00:12:38.443 16:11:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:38.443 16:11:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:38.443 16:11:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:38.443 16:11:37 -- app/cmdline.sh@1 -- # killprocess 2969836 00:12:38.443 16:11:37 -- common/autotest_common.sh@926 -- # '[' -z 2969836 ']' 00:12:38.443 16:11:37 -- common/autotest_common.sh@930 -- # kill -0 2969836 00:12:38.443 16:11:37 -- common/autotest_common.sh@931 -- # uname 00:12:38.443 16:11:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:38.443 16:11:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2969836 00:12:38.443 16:11:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:38.443 16:11:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:38.443 16:11:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2969836' 00:12:38.443 killing process with pid 2969836 00:12:38.443 16:11:37 -- common/autotest_common.sh@945 -- # kill 2969836 00:12:38.443 16:11:37 -- common/autotest_common.sh@950 -- # wait 2969836 00:12:39.386 00:12:39.386 real 0m2.205s 00:12:39.386 user 0m2.394s 00:12:39.386 sys 0m0.526s 00:12:39.386 16:11:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.386 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 ************************************ 00:12:39.386 END TEST app_cmdline 00:12:39.386 ************************************ 00:12:39.386 16:11:38 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:12:39.386 16:11:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:39.386 16:11:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.386 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.386 ************************************ 00:12:39.386 START TEST version 00:12:39.386 ************************************ 00:12:39.386 16:11:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:12:39.386 * Looking for test storage... 
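
(Editor's aside) The app_cmdline run that ends above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then checks that only those two methods are reachable, with anything else (here env_dpdk_get_mem_stats) rejected as -32601 "Method not found". A condensed sketch of that allow-list check, not part of the original script, assuming the target is already listening on /var/tmp/spdk.sock:

    rpc=./scripts/rpc.py                                   # assumed path inside an SPDK checkout
    $rpc spdk_get_version | jq -r .version                 # allowed, e.g. "SPDK v24.01.1-pre git sha1 36faa8c312b"
    $rpc rpc_get_methods | jq -r '.[]' | sort              # allowed, should list exactly the two methods
    if $rpc env_dpdk_get_mem_stats; then                   # anything else must fail with Method not found
        echo "unexpected: disallowed RPC succeeded" >&2; exit 1
    fi
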
00:12:39.386 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:12:39.386 16:11:38 -- app/version.sh@17 -- # get_header_version major 00:12:39.386 16:11:38 -- app/version.sh@14 -- # cut -f2 00:12:39.386 16:11:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:39.386 16:11:38 -- app/version.sh@14 -- # tr -d '"' 00:12:39.386 16:11:38 -- app/version.sh@17 -- # major=24 00:12:39.386 16:11:38 -- app/version.sh@18 -- # get_header_version minor 00:12:39.386 16:11:38 -- app/version.sh@14 -- # tr -d '"' 00:12:39.386 16:11:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:39.386 16:11:38 -- app/version.sh@14 -- # cut -f2 00:12:39.386 16:11:38 -- app/version.sh@18 -- # minor=1 00:12:39.386 16:11:38 -- app/version.sh@19 -- # get_header_version patch 00:12:39.386 16:11:38 -- app/version.sh@14 -- # cut -f2 00:12:39.386 16:11:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:39.386 16:11:38 -- app/version.sh@14 -- # tr -d '"' 00:12:39.386 16:11:38 -- app/version.sh@19 -- # patch=1 00:12:39.386 16:11:38 -- app/version.sh@20 -- # get_header_version suffix 00:12:39.386 16:11:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:12:39.386 16:11:38 -- app/version.sh@14 -- # cut -f2 00:12:39.386 16:11:38 -- app/version.sh@14 -- # tr -d '"' 00:12:39.386 16:11:38 -- app/version.sh@20 -- # suffix=-pre 00:12:39.386 16:11:38 -- app/version.sh@22 -- # version=24.1 00:12:39.386 16:11:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:39.386 16:11:38 -- app/version.sh@25 -- # version=24.1.1 00:12:39.386 16:11:38 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:39.386 16:11:38 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:39.386 16:11:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:39.647 16:11:38 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:39.647 16:11:38 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:39.647 00:12:39.647 real 0m0.134s 00:12:39.647 user 0m0.046s 00:12:39.647 sys 0m0.118s 00:12:39.647 16:11:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.647 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.647 ************************************ 00:12:39.647 END TEST version 00:12:39.647 ************************************ 00:12:39.647 16:11:38 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:12:39.647 16:11:38 -- spdk/autotest.sh@204 -- # uname -s 00:12:39.647 16:11:38 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:12:39.647 16:11:38 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:12:39.647 16:11:38 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:12:39.647 16:11:38 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:12:39.647 16:11:38 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:12:39.647 16:11:38 -- spdk/autotest.sh@268 -- # timing_exit lib 00:12:39.647 16:11:38 -- common/autotest_common.sh@718 -- # xtrace_disable 
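
(Editor's aside) The version test above rebuilds the expected version string by grepping SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and comparing it against what the Python package reports. A minimal sketch of that header parsing, run from an SPDK checkout (paths and field layout assumed to match the tree being tested):

    hdr=include/spdk/version.h
    get() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)
    version="$major.$minor"
    if [ "$patch" != 0 ]; then version="$version.$patch"; fi
    echo "${version}${suffix}"        # e.g. 24.1.1-pre for the build under test
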
00:12:39.647 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.647 16:11:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:39.647 16:11:38 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:12:39.648 16:11:38 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:12:39.648 16:11:38 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:12:39.648 16:11:38 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:12:39.648 16:11:38 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:12:39.648 16:11:38 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:39.648 16:11:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:39.648 16:11:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.648 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.648 ************************************ 00:12:39.648 START TEST nvmf_tcp 00:12:39.648 ************************************ 00:12:39.648 16:11:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:39.648 * Looking for test storage... 00:12:39.648 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.648 16:11:38 -- nvmf/common.sh@7 -- # uname -s 00:12:39.648 16:11:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.648 16:11:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.648 16:11:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.648 16:11:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.648 16:11:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.648 16:11:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.648 16:11:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.648 16:11:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.648 16:11:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.648 16:11:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.648 16:11:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:39.648 16:11:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:39.648 16:11:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.648 16:11:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.648 16:11:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:39.648 16:11:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:39.648 16:11:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.648 16:11:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.648 16:11:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.648 16:11:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:39.648 16:11:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.648 16:11:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.648 16:11:38 -- paths/export.sh@5 -- # export PATH 00:12:39.648 16:11:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.648 16:11:38 -- nvmf/common.sh@46 -- # : 0 00:12:39.648 16:11:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:39.648 16:11:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:39.648 16:11:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:39.648 16:11:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.648 16:11:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.648 16:11:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:39.648 16:11:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:39.648 16:11:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:12:39.648 16:11:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.648 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:12:39.648 16:11:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:39.648 16:11:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:39.648 16:11:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.648 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.648 ************************************ 00:12:39.648 START TEST nvmf_example 00:12:39.648 ************************************ 00:12:39.648 16:11:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:39.910 * Looking for test storage... 
00:12:39.910 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:39.910 16:11:38 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.910 16:11:38 -- nvmf/common.sh@7 -- # uname -s 00:12:39.910 16:11:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.910 16:11:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.910 16:11:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.910 16:11:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.910 16:11:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.910 16:11:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.910 16:11:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.910 16:11:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.910 16:11:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.910 16:11:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.910 16:11:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:39.910 16:11:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:39.910 16:11:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.910 16:11:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.910 16:11:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:39.910 16:11:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:39.910 16:11:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.910 16:11:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.910 16:11:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.910 16:11:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.910 16:11:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.910 16:11:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.910 16:11:38 -- paths/export.sh@5 -- # export PATH 00:12:39.910 16:11:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.910 16:11:38 -- nvmf/common.sh@46 -- # : 0 00:12:39.910 16:11:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:39.910 16:11:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:39.911 16:11:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:39.911 16:11:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.911 16:11:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.911 16:11:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:39.911 16:11:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:39.911 16:11:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:39.911 16:11:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:39.911 16:11:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:39.911 16:11:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:39.911 16:11:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:39.911 16:11:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:39.911 16:11:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:39.911 16:11:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:39.911 16:11:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:39.911 16:11:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.911 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:39.911 16:11:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:39.911 16:11:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:39.911 16:11:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.911 16:11:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:39.911 16:11:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:39.911 16:11:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:39.911 16:11:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.911 16:11:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.911 16:11:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.911 16:11:38 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:12:39.911 16:11:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:39.911 16:11:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:39.911 16:11:38 -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.193 16:11:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:45.193 16:11:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:45.193 16:11:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:45.193 16:11:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:45.193 16:11:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:45.193 16:11:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:45.193 16:11:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:45.193 16:11:44 -- nvmf/common.sh@294 -- # net_devs=() 00:12:45.193 16:11:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:45.193 16:11:44 -- nvmf/common.sh@295 -- # e810=() 00:12:45.193 16:11:44 -- nvmf/common.sh@295 -- # local -ga e810 00:12:45.193 16:11:44 -- nvmf/common.sh@296 -- # x722=() 00:12:45.193 16:11:44 -- nvmf/common.sh@296 -- # local -ga x722 00:12:45.193 16:11:44 -- nvmf/common.sh@297 -- # mlx=() 00:12:45.193 16:11:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:45.193 16:11:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.193 16:11:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:45.193 16:11:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:45.193 16:11:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:45.193 16:11:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:45.193 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:45.193 16:11:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:45.193 16:11:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:45.193 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:45.193 16:11:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.193 
16:11:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:45.193 16:11:44 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:12:45.193 16:11:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:45.193 16:11:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.193 16:11:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:45.193 16:11:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.193 16:11:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:45.193 Found net devices under 0000:27:00.0: cvl_0_0 00:12:45.193 16:11:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.193 16:11:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:45.193 16:11:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.454 16:11:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:45.454 16:11:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.454 16:11:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:45.454 Found net devices under 0000:27:00.1: cvl_0_1 00:12:45.454 16:11:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.454 16:11:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:45.454 16:11:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:45.454 16:11:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:45.454 16:11:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:45.454 16:11:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:45.454 16:11:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.454 16:11:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.454 16:11:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.454 16:11:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:45.454 16:11:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.454 16:11:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.454 16:11:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:45.454 16:11:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.454 16:11:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.454 16:11:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:45.454 16:11:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:45.454 16:11:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.454 16:11:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.454 16:11:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.454 16:11:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.454 16:11:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:45.454 16:11:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.716 16:11:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.716 16:11:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.716 16:11:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:45.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:12:45.716 00:12:45.716 --- 10.0.0.2 ping statistics --- 00:12:45.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.716 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:12:45.716 16:11:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:45.716 00:12:45.716 --- 10.0.0.1 ping statistics --- 00:12:45.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.716 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:45.716 16:11:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.716 16:11:44 -- nvmf/common.sh@410 -- # return 0 00:12:45.716 16:11:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:45.716 16:11:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.716 16:11:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:45.716 16:11:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:45.716 16:11:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.716 16:11:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:45.716 16:11:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:45.716 16:11:44 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:45.716 16:11:44 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:45.716 16:11:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:45.716 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.716 16:11:44 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:45.716 16:11:44 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:45.716 16:11:44 -- target/nvmf_example.sh@34 -- # nvmfpid=2974003 00:12:45.716 16:11:44 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.716 16:11:44 -- target/nvmf_example.sh@36 -- # waitforlisten 2974003 00:12:45.716 16:11:44 -- common/autotest_common.sh@819 -- # '[' -z 2974003 ']' 00:12:45.716 16:11:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.716 16:11:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.716 16:11:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
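
(Editor's aside) Before the example target starts, nvmftestinit above splits the two ports of the discovered NIC across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and the two pings verify connectivity in both directions. A condensed sketch of that plumbing, assuming the same interface names and root privileges:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"                           # namespace holding the target-side port
    ip link set cvl_0_0 netns "$ns"              # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, default namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1       # target -> initiator
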
00:12:45.716 16:11:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.716 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:12:45.716 16:11:44 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:45.978 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.549 16:11:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:46.549 16:11:45 -- common/autotest_common.sh@852 -- # return 0 00:12:46.549 16:11:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:46.549 16:11:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.549 16:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.549 16:11:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:46.549 16:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.549 16:11:45 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:46.549 16:11:45 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:46.549 16:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.549 16:11:45 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:46.549 16:11:45 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.549 16:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.549 16:11:45 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.549 16:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.549 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:12:46.549 16:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.549 16:11:45 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:46.549 16:11:45 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:46.810 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.868 Initializing NVMe Controllers 00:12:56.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:56.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:56.868 Initialization complete. Launching workers. 
00:12:56.868 ======================================================== 00:12:56.868 Latency(us) 00:12:56.868 Device Information : IOPS MiB/s Average min max 00:12:56.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18753.91 73.26 3412.27 685.75 15983.88 00:12:56.868 ======================================================== 00:12:56.868 Total : 18753.91 73.26 3412.27 685.75 15983.88 00:12:56.868 00:12:56.868 16:11:55 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:56.868 16:11:55 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:56.868 16:11:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.868 16:11:55 -- nvmf/common.sh@116 -- # sync 00:12:56.868 16:11:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.868 16:11:55 -- nvmf/common.sh@119 -- # set +e 00:12:56.868 16:11:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.868 16:11:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.130 rmmod nvme_tcp 00:12:57.130 rmmod nvme_fabrics 00:12:57.130 rmmod nvme_keyring 00:12:57.130 16:11:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.130 16:11:55 -- nvmf/common.sh@123 -- # set -e 00:12:57.130 16:11:55 -- nvmf/common.sh@124 -- # return 0 00:12:57.130 16:11:55 -- nvmf/common.sh@477 -- # '[' -n 2974003 ']' 00:12:57.130 16:11:55 -- nvmf/common.sh@478 -- # killprocess 2974003 00:12:57.130 16:11:55 -- common/autotest_common.sh@926 -- # '[' -z 2974003 ']' 00:12:57.130 16:11:55 -- common/autotest_common.sh@930 -- # kill -0 2974003 00:12:57.130 16:11:55 -- common/autotest_common.sh@931 -- # uname 00:12:57.130 16:11:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:57.130 16:11:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2974003 00:12:57.130 16:11:55 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:12:57.130 16:11:55 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:12:57.130 16:11:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2974003' 00:12:57.130 killing process with pid 2974003 00:12:57.130 16:11:55 -- common/autotest_common.sh@945 -- # kill 2974003 00:12:57.130 16:11:55 -- common/autotest_common.sh@950 -- # wait 2974003 00:12:57.700 nvmf threads initialize successfully 00:12:57.700 bdev subsystem init successfully 00:12:57.700 created a nvmf target service 00:12:57.700 create targets's poll groups done 00:12:57.700 all subsystems of target started 00:12:57.700 nvmf target is running 00:12:57.700 all subsystems of target stopped 00:12:57.700 destroy targets's poll groups done 00:12:57.700 destroyed the nvmf target service 00:12:57.700 bdev subsystem finish successfully 00:12:57.700 nvmf threads destroy successfully 00:12:57.700 16:11:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:57.700 16:11:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:57.700 16:11:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:57.700 16:11:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.700 16:11:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:57.700 16:11:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.700 16:11:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.700 16:11:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.615 16:11:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:59.615 16:11:58 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:59.615 16:11:58 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:12:59.615 16:11:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 00:12:59.615 real 0m19.951s 00:12:59.615 user 0m46.771s 00:12:59.615 sys 0m5.574s 00:12:59.615 16:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.615 16:11:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 ************************************ 00:12:59.615 END TEST nvmf_example 00:12:59.615 ************************************ 00:12:59.615 16:11:58 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:59.615 16:11:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:59.615 16:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.615 16:11:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.615 ************************************ 00:12:59.615 START TEST nvmf_filesystem 00:12:59.616 ************************************ 00:12:59.616 16:11:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:59.877 * Looking for test storage... 00:12:59.877 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.877 16:11:58 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:12:59.877 16:11:58 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:59.877 16:11:58 -- common/autotest_common.sh@34 -- # set -e 00:12:59.877 16:11:58 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:59.877 16:11:58 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:59.877 16:11:58 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:59.878 16:11:58 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:12:59.878 16:11:58 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:59.878 16:11:58 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:59.878 16:11:58 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:59.878 16:11:58 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:59.878 16:11:58 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:59.878 16:11:58 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:59.878 16:11:58 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:59.878 16:11:58 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:59.878 16:11:58 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:59.878 16:11:58 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:59.878 16:11:58 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:59.878 16:11:58 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:59.878 16:11:58 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:59.878 16:11:58 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:59.878 16:11:58 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:59.878 16:11:58 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:59.878 16:11:58 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:12:59.878 16:11:58 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:12:59.878 16:11:58 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:59.878 16:11:58 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:59.878 16:11:58 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:59.878 16:11:58 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:59.878 16:11:58 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:59.878 16:11:58 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:59.878 16:11:58 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:59.878 16:11:58 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:59.878 16:11:58 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:59.878 16:11:58 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:59.878 16:11:58 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:59.878 16:11:58 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:59.878 16:11:58 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:59.878 16:11:58 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:59.878 16:11:58 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:12:59.878 16:11:58 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:59.878 16:11:58 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:59.878 16:11:58 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:59.878 16:11:58 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:59.878 16:11:58 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:59.878 16:11:58 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:59.878 16:11:58 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:59.878 16:11:58 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:12:59.878 16:11:58 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:12:59.878 16:11:58 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:59.878 16:11:58 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:12:59.878 16:11:58 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:12:59.878 16:11:58 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:12:59.878 16:11:58 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:12:59.878 16:11:58 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:12:59.878 16:11:58 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:12:59.878 16:11:58 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:12:59.878 16:11:58 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:12:59.878 16:11:58 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:12:59.878 16:11:58 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:12:59.878 16:11:58 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:12:59.878 16:11:58 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:12:59.878 16:11:58 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:12:59.878 16:11:58 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:12:59.878 16:11:58 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:12:59.878 16:11:58 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:12:59.878 16:11:58 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:59.878 16:11:58 -- common/build_config.sh@67 
-- # CONFIG_FC=n 00:12:59.878 16:11:58 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:12:59.878 16:11:58 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:12:59.878 16:11:58 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:12:59.878 16:11:58 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:12:59.878 16:11:58 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:12:59.878 16:11:58 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:12:59.878 16:11:58 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:12:59.878 16:11:58 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:12:59.878 16:11:58 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:12:59.878 16:11:58 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:59.878 16:11:58 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:12:59.878 16:11:58 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:12:59.878 16:11:58 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:12:59.878 16:11:58 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:12:59.878 16:11:58 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:12:59.878 16:11:58 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:12:59.878 16:11:58 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:12:59.878 16:11:58 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:59.878 16:11:58 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:12:59.878 16:11:58 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:59.878 16:11:58 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:59.878 16:11:58 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:59.878 16:11:58 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:59.878 16:11:58 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:59.878 16:11:58 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:59.878 16:11:58 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:59.878 16:11:58 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:12:59.878 16:11:58 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:59.878 #define SPDK_CONFIG_H 00:12:59.878 #define SPDK_CONFIG_APPS 1 00:12:59.878 #define SPDK_CONFIG_ARCH native 00:12:59.878 #define SPDK_CONFIG_ASAN 1 00:12:59.878 #undef SPDK_CONFIG_AVAHI 00:12:59.878 #undef SPDK_CONFIG_CET 00:12:59.878 #define SPDK_CONFIG_COVERAGE 1 00:12:59.878 #define SPDK_CONFIG_CROSS_PREFIX 00:12:59.878 #undef SPDK_CONFIG_CRYPTO 00:12:59.878 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:59.878 #undef SPDK_CONFIG_CUSTOMOCF 00:12:59.878 #undef SPDK_CONFIG_DAOS 00:12:59.878 #define SPDK_CONFIG_DAOS_DIR 00:12:59.878 #define SPDK_CONFIG_DEBUG 1 00:12:59.878 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:59.878 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:12:59.878 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:59.878 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:59.878 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:59.878 #define 
SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:12:59.878 #define SPDK_CONFIG_EXAMPLES 1 00:12:59.878 #undef SPDK_CONFIG_FC 00:12:59.878 #define SPDK_CONFIG_FC_PATH 00:12:59.878 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:59.878 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:59.878 #undef SPDK_CONFIG_FUSE 00:12:59.878 #undef SPDK_CONFIG_FUZZER 00:12:59.878 #define SPDK_CONFIG_FUZZER_LIB 00:12:59.878 #undef SPDK_CONFIG_GOLANG 00:12:59.878 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:59.878 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:59.878 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:59.878 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:59.878 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:59.878 #define SPDK_CONFIG_IDXD 1 00:12:59.878 #undef SPDK_CONFIG_IDXD_KERNEL 00:12:59.878 #undef SPDK_CONFIG_IPSEC_MB 00:12:59.878 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:59.878 #define SPDK_CONFIG_ISAL 1 00:12:59.878 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:59.878 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:59.878 #define SPDK_CONFIG_LIBDIR 00:12:59.878 #undef SPDK_CONFIG_LTO 00:12:59.878 #define SPDK_CONFIG_MAX_LCORES 00:12:59.878 #define SPDK_CONFIG_NVME_CUSE 1 00:12:59.878 #undef SPDK_CONFIG_OCF 00:12:59.878 #define SPDK_CONFIG_OCF_PATH 00:12:59.878 #define SPDK_CONFIG_OPENSSL_PATH 00:12:59.878 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:59.878 #undef SPDK_CONFIG_PGO_USE 00:12:59.878 #define SPDK_CONFIG_PREFIX /usr/local 00:12:59.878 #undef SPDK_CONFIG_RAID5F 00:12:59.878 #undef SPDK_CONFIG_RBD 00:12:59.878 #define SPDK_CONFIG_RDMA 1 00:12:59.878 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:59.878 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:59.878 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:59.878 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:59.878 #define SPDK_CONFIG_SHARED 1 00:12:59.878 #undef SPDK_CONFIG_SMA 00:12:59.878 #define SPDK_CONFIG_TESTS 1 00:12:59.878 #undef SPDK_CONFIG_TSAN 00:12:59.878 #define SPDK_CONFIG_UBLK 1 00:12:59.878 #define SPDK_CONFIG_UBSAN 1 00:12:59.878 #undef SPDK_CONFIG_UNIT_TESTS 00:12:59.878 #undef SPDK_CONFIG_URING 00:12:59.878 #define SPDK_CONFIG_URING_PATH 00:12:59.878 #undef SPDK_CONFIG_URING_ZNS 00:12:59.878 #undef SPDK_CONFIG_USDT 00:12:59.878 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:59.878 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:59.878 #undef SPDK_CONFIG_VFIO_USER 00:12:59.878 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:59.878 #define SPDK_CONFIG_VHOST 1 00:12:59.879 #define SPDK_CONFIG_VIRTIO 1 00:12:59.879 #undef SPDK_CONFIG_VTUNE 00:12:59.879 #define SPDK_CONFIG_VTUNE_DIR 00:12:59.879 #define SPDK_CONFIG_WERROR 1 00:12:59.879 #define SPDK_CONFIG_WPDK_DIR 00:12:59.879 #undef SPDK_CONFIG_XNVME 00:12:59.879 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:59.879 16:11:58 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:59.879 16:11:58 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:59.879 16:11:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.879 16:11:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.879 16:11:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.879 16:11:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.879 16:11:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.879 16:11:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.879 16:11:58 -- paths/export.sh@5 -- # export PATH 00:12:59.879 16:11:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.879 16:11:58 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:12:59.879 16:11:58 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:12:59.879 16:11:58 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:12:59.879 16:11:58 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:12:59.879 16:11:58 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:59.879 16:11:58 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:12:59.879 16:11:58 -- pm/common@16 -- # TEST_TAG=N/A 00:12:59.879 16:11:58 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:12:59.879 16:11:58 -- common/autotest_common.sh@52 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:12:59.879 16:11:58 -- common/autotest_common.sh@56 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:59.879 16:11:58 -- common/autotest_common.sh@58 -- # : 0 00:12:59.879 
16:11:58 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:12:59.879 16:11:58 -- common/autotest_common.sh@60 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:59.879 16:11:58 -- common/autotest_common.sh@62 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:12:59.879 16:11:58 -- common/autotest_common.sh@64 -- # : 00:12:59.879 16:11:58 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:12:59.879 16:11:58 -- common/autotest_common.sh@66 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:12:59.879 16:11:58 -- common/autotest_common.sh@68 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:12:59.879 16:11:58 -- common/autotest_common.sh@70 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:12:59.879 16:11:58 -- common/autotest_common.sh@72 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:59.879 16:11:58 -- common/autotest_common.sh@74 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:12:59.879 16:11:58 -- common/autotest_common.sh@76 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:12:59.879 16:11:58 -- common/autotest_common.sh@78 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:12:59.879 16:11:58 -- common/autotest_common.sh@80 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:12:59.879 16:11:58 -- common/autotest_common.sh@82 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:12:59.879 16:11:58 -- common/autotest_common.sh@84 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:12:59.879 16:11:58 -- common/autotest_common.sh@86 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:12:59.879 16:11:58 -- common/autotest_common.sh@88 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:12:59.879 16:11:58 -- common/autotest_common.sh@90 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:59.879 16:11:58 -- common/autotest_common.sh@92 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:12:59.879 16:11:58 -- common/autotest_common.sh@94 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:12:59.879 16:11:58 -- common/autotest_common.sh@96 -- # : tcp 00:12:59.879 16:11:58 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:59.879 16:11:58 -- common/autotest_common.sh@98 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:12:59.879 16:11:58 -- common/autotest_common.sh@100 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:12:59.879 16:11:58 -- common/autotest_common.sh@102 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:12:59.879 16:11:58 -- common/autotest_common.sh@104 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:12:59.879 16:11:58 -- common/autotest_common.sh@106 -- # : 0 
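In the trace above, each "-- # : <value>" entry followed by "-- # export SPDK_TEST_..." is autotest_common.sh giving a test flag a default and then exporting it. A minimal sketch of that pattern (assuming the usual ":=" parameter-expansion form; the values are the ones this run reports):

: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST    # functional tests enabled
: "${SPDK_TEST_UNITTEST:=0}";         export SPDK_TEST_UNITTEST          # unit tests skipped in this job
: "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF              # NVMe-oF target tests enabled
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT    # transport exercised below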
00:12:59.879 16:11:58 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:12:59.879 16:11:58 -- common/autotest_common.sh@108 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:12:59.879 16:11:58 -- common/autotest_common.sh@110 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:12:59.879 16:11:58 -- common/autotest_common.sh@112 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:59.879 16:11:58 -- common/autotest_common.sh@114 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:12:59.879 16:11:58 -- common/autotest_common.sh@116 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:12:59.879 16:11:58 -- common/autotest_common.sh@118 -- # : 00:12:59.879 16:11:58 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:59.879 16:11:58 -- common/autotest_common.sh@120 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:12:59.879 16:11:58 -- common/autotest_common.sh@122 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:12:59.879 16:11:58 -- common/autotest_common.sh@124 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:12:59.879 16:11:58 -- common/autotest_common.sh@126 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:12:59.879 16:11:58 -- common/autotest_common.sh@128 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:12:59.879 16:11:58 -- common/autotest_common.sh@130 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:12:59.879 16:11:58 -- common/autotest_common.sh@132 -- # : 00:12:59.879 16:11:58 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:12:59.879 16:11:58 -- common/autotest_common.sh@134 -- # : true 00:12:59.879 16:11:58 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:12:59.879 16:11:58 -- common/autotest_common.sh@136 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:12:59.879 16:11:58 -- common/autotest_common.sh@138 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:12:59.879 16:11:58 -- common/autotest_common.sh@140 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:12:59.879 16:11:58 -- common/autotest_common.sh@142 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:12:59.879 16:11:58 -- common/autotest_common.sh@144 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:12:59.879 16:11:58 -- common/autotest_common.sh@146 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:12:59.879 16:11:58 -- common/autotest_common.sh@148 -- # : 00:12:59.879 16:11:58 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:12:59.879 16:11:58 -- common/autotest_common.sh@150 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:12:59.879 16:11:58 -- common/autotest_common.sh@152 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:12:59.879 16:11:58 -- common/autotest_common.sh@154 -- # 
: 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:12:59.879 16:11:58 -- common/autotest_common.sh@156 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:12:59.879 16:11:58 -- common/autotest_common.sh@158 -- # : 1 00:12:59.879 16:11:58 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:12:59.879 16:11:58 -- common/autotest_common.sh@160 -- # : 0 00:12:59.879 16:11:58 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:12:59.879 16:11:58 -- common/autotest_common.sh@163 -- # : 00:12:59.879 16:11:58 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:12:59.879 16:11:58 -- common/autotest_common.sh@165 -- # : 0 00:12:59.880 16:11:58 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:12:59.880 16:11:58 -- common/autotest_common.sh@167 -- # : 0 00:12:59.880 16:11:58 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:59.880 16:11:58 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:59.880 16:11:58 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
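The exports above wire the freshly built SPDK, DPDK and libvfio-user libraries into the loader path; because common.sh gets sourced more than once in this run, the same directories show up repeatedly in LD_LIBRARY_PATH. A sketch of the intent, with $rootdir standing in for the workspace checkout:

rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk             # checkout used by this job
export SPDK_LIB_DIR="$rootdir/build/lib"
export DPDK_LIB_DIR="$rootdir/dpdk/build/lib"
export VFIO_LIB_DIR="$rootdir/build/libvfio-user/usr/local/lib"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"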
00:12:59.880 16:11:58 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:59.880 16:11:58 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:12:59.880 16:11:58 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:59.880 16:11:58 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:12:59.880 16:11:58 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:59.880 16:11:58 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:59.880 16:11:58 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:59.880 16:11:58 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:59.880 16:11:58 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:59.880 16:11:58 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:12:59.880 16:11:58 -- common/autotest_common.sh@196 -- # cat 00:12:59.880 16:11:58 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:12:59.880 16:11:58 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:59.880 16:11:58 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:59.880 16:11:58 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:59.880 16:11:58 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:59.880 16:11:58 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:12:59.880 16:11:58 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:12:59.880 16:11:58 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:59.880 16:11:58 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:12:59.880 16:11:58 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:59.880 16:11:58 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:12:59.880 16:11:58 -- common/autotest_common.sh@239 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:59.880 16:11:58 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:59.880 16:11:58 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:59.880 16:11:58 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:59.880 16:11:58 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:59.880 16:11:58 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:59.880 16:11:58 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:12:59.880 16:11:58 -- common/autotest_common.sh@249 -- # export valgrind= 00:12:59.880 16:11:58 -- common/autotest_common.sh@249 -- # valgrind= 00:12:59.880 16:11:58 -- common/autotest_common.sh@255 -- # uname -s 00:12:59.880 16:11:58 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:12:59.880 16:11:58 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:12:59.880 16:11:58 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:12:59.880 16:11:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@265 -- # MAKE=make 00:12:59.880 16:11:58 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j128 00:12:59.880 16:11:58 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:12:59.880 16:11:58 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:12:59.880 16:11:58 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:12:59.880 16:11:58 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:12:59.880 16:11:58 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:12:59.880 16:11:58 -- common/autotest_common.sh@291 -- # for i in "$@" 00:12:59.880 16:11:58 -- common/autotest_common.sh@292 -- # case "$i" in 00:12:59.880 16:11:58 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:12:59.880 16:11:58 -- common/autotest_common.sh@309 -- # [[ -z 2976817 ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@309 -- # kill -0 2976817 00:12:59.880 16:11:58 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:12:59.880 16:11:58 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:12:59.880 16:11:58 -- common/autotest_common.sh@322 -- # local mount target_dir 00:12:59.880 16:11:58 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:12:59.880 16:11:58 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:12:59.880 16:11:58 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:12:59.880 16:11:58 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:12:59.880 16:11:58 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.SlPLcg 00:12:59.880 16:11:58 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" 
"$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:59.880 16:11:58 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:12:59.880 16:11:58 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SlPLcg/tests/target /tmp/spdk.SlPLcg 00:12:59.880 16:11:58 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:12:59.880 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.880 16:11:58 -- common/autotest_common.sh@318 -- # df -T 00:12:59.880 16:11:58 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:12:59.880 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:12:59.880 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=991178752 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:12:59.880 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4293251072 00:12:59.880 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=121060519936 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129472499712 00:12:59.880 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=8411979776 00:12:59.880 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=64733655040 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64736247808 00:12:59.880 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:12:59.880 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=25884811264 00:12:59.880 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25894502400 00:12:59.881 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=9691136 00:12:59.881 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:12:59.881 16:11:58 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=66560 00:12:59.881 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:12:59.881 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=437248 00:12:59.881 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:59.881 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=64734900224 00:12:59.881 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64736251904 00:12:59.881 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=1351680 00:12:59.881 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:12:59.881 16:11:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:12:59.881 16:11:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=12947243008 00:12:59.881 16:11:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12947247104 00:12:59.881 16:11:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:12:59.881 16:11:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:12:59.881 16:11:58 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:12:59.881 * Looking for test storage... 00:12:59.881 16:11:58 -- common/autotest_common.sh@359 -- # local target_space new_size 00:12:59.881 16:11:58 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:12:59.881 16:11:58 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.881 16:11:58 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:59.881 16:11:58 -- common/autotest_common.sh@363 -- # mount=/ 00:12:59.881 16:11:58 -- common/autotest_common.sh@365 -- # target_space=121060519936 00:12:59.881 16:11:58 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:12:59.881 16:11:58 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:12:59.881 16:11:58 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:12:59.881 16:11:58 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:12:59.881 16:11:58 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:12:59.881 16:11:58 -- common/autotest_common.sh@372 -- # new_size=10626572288 00:12:59.881 16:11:58 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:59.881 16:11:58 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.881 16:11:58 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.881 16:11:58 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.881 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:59.881 16:11:58 -- common/autotest_common.sh@380 -- # return 0 00:12:59.881 16:11:58 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:12:59.881 16:11:58 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:12:59.881 16:11:58 -- common/autotest_common.sh@1669 -- # trap 'trap 
- ERR; print_backtrace >&2' ERR 00:12:59.881 16:11:58 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:59.881 16:11:58 -- common/autotest_common.sh@1672 -- # true 00:12:59.881 16:11:58 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:12:59.881 16:11:58 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:59.881 16:11:58 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:59.881 16:11:58 -- common/autotest_common.sh@27 -- # exec 00:12:59.881 16:11:58 -- common/autotest_common.sh@29 -- # exec 00:12:59.881 16:11:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:59.881 16:11:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:59.881 16:11:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:59.881 16:11:58 -- common/autotest_common.sh@18 -- # set -x 00:12:59.881 16:11:58 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.881 16:11:58 -- nvmf/common.sh@7 -- # uname -s 00:12:59.881 16:11:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.881 16:11:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.881 16:11:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.881 16:11:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.881 16:11:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.881 16:11:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.881 16:11:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.881 16:11:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.881 16:11:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.881 16:11:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.881 16:11:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:59.881 16:11:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:12:59.881 16:11:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.881 16:11:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.881 16:11:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:59.881 16:11:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:59.881 16:11:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.881 16:11:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.881 16:11:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.881 16:11:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.881 16:11:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.881 16:11:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.881 16:11:58 -- paths/export.sh@5 -- # export PATH 00:12:59.881 16:11:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.881 16:11:58 -- nvmf/common.sh@46 -- # : 0 00:12:59.881 16:11:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.881 16:11:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.881 16:11:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.881 16:11:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.881 16:11:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.881 16:11:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.881 16:11:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.881 16:11:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.881 16:11:58 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:59.881 16:11:58 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:59.881 16:11:58 -- target/filesystem.sh@15 -- # nvmftestinit 00:12:59.881 16:11:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.881 16:11:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.881 16:11:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.881 16:11:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.881 16:11:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.881 16:11:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.881 16:11:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.881 16:11:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.881 16:11:58 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:12:59.881 16:11:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
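Before any subsystem is created, target/filesystem.sh pulls in test/nvmf/common.sh and calls nvmftestinit, which is what triggers the NIC discovery and namespace setup that follows. A rough reconstruction of that preamble as traced above (exact argument handling omitted):

source "$rootdir/test/nvmf/common.sh"   # defines NVMF_PORT=4420, NVME_HOSTNQN via 'nvme gen-hostnqn', nvmftestinit, ...
MALLOC_BDEV_SIZE=512                    # MiB backing the Malloc1 bdev created later
MALLOC_BLOCK_SIZE=512
nvmftestinit                            # discovers NICs, builds the netns topology, ends with 'modprobe nvme-tcp'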
00:12:59.881 16:11:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:59.881 16:11:58 -- common/autotest_common.sh@10 -- # set +x 00:13:06.469 16:12:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.469 16:12:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:06.469 16:12:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:06.469 16:12:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:06.469 16:12:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:06.469 16:12:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:06.469 16:12:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:06.469 16:12:04 -- nvmf/common.sh@294 -- # net_devs=() 00:13:06.469 16:12:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:06.469 16:12:04 -- nvmf/common.sh@295 -- # e810=() 00:13:06.469 16:12:04 -- nvmf/common.sh@295 -- # local -ga e810 00:13:06.469 16:12:04 -- nvmf/common.sh@296 -- # x722=() 00:13:06.469 16:12:04 -- nvmf/common.sh@296 -- # local -ga x722 00:13:06.469 16:12:04 -- nvmf/common.sh@297 -- # mlx=() 00:13:06.469 16:12:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:06.469 16:12:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.469 16:12:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:06.469 16:12:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.469 16:12:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:06.469 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:06.469 16:12:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.469 16:12:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:06.469 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:06.469 16:12:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
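The "Found 0000:27:00.0/0000:27:00.1" lines come from matching every PCI NIC against known Intel (0x8086) and Mellanox (0x15b3) device IDs; per the lists above, 0x159b is treated as an E810 part. A simplified sysfs-based equivalent of that match (not the script's actual pci_bus_cache mechanism, just the idea):

intel=0x8086
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
    # E810 (0x1592/0x159b) and X722 (0x37d2) are the Intel IDs the test looks for
    if [[ $vendor == "$intel" && $device =~ ^0x(1592|159b|37d2)$ ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
    fi
done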
00:13:06.469 16:12:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.469 16:12:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.469 16:12:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.469 16:12:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:06.469 Found net devices under 0000:27:00.0: cvl_0_0 00:13:06.469 16:12:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.469 16:12:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.469 16:12:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.469 16:12:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.469 16:12:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:06.469 Found net devices under 0000:27:00.1: cvl_0_1 00:13:06.469 16:12:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.469 16:12:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:06.469 16:12:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:06.469 16:12:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:06.469 16:12:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.469 16:12:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.469 16:12:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.469 16:12:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:06.469 16:12:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.469 16:12:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.469 16:12:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:06.469 16:12:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.469 16:12:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.469 16:12:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:06.469 16:12:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:06.469 16:12:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.469 16:12:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.469 16:12:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.469 16:12:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.470 16:12:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:06.470 16:12:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.470 16:12:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.470 16:12:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.470 16:12:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:06.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:13:06.470 00:13:06.470 --- 10.0.0.2 ping statistics --- 00:13:06.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.470 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:13:06.470 16:12:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:06.470 00:13:06.470 --- 10.0.0.1 ping statistics --- 00:13:06.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.470 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:06.470 16:12:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.470 16:12:05 -- nvmf/common.sh@410 -- # return 0 00:13:06.470 16:12:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:06.470 16:12:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.470 16:12:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:06.470 16:12:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:06.470 16:12:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.470 16:12:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:06.470 16:12:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:06.470 16:12:05 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:06.470 16:12:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:06.470 16:12:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.470 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.470 ************************************ 00:13:06.470 START TEST nvmf_filesystem_no_in_capsule 00:13:06.470 ************************************ 00:13:06.470 16:12:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:13:06.470 16:12:05 -- target/filesystem.sh@47 -- # in_capsule=0 00:13:06.470 16:12:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:06.470 16:12:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:06.470 16:12:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:06.470 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.470 16:12:05 -- nvmf/common.sh@469 -- # nvmfpid=2980629 00:13:06.470 16:12:05 -- nvmf/common.sh@470 -- # waitforlisten 2980629 00:13:06.470 16:12:05 -- common/autotest_common.sh@819 -- # '[' -z 2980629 ']' 00:13:06.470 16:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.470 16:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.470 16:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.470 16:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.470 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:13:06.470 16:12:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.470 [2024-04-23 16:12:05.241872] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:13:06.470 [2024-04-23 16:12:05.242012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.470 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.470 [2024-04-23 16:12:05.386743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.729 [2024-04-23 16:12:05.486826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.729 [2024-04-23 16:12:05.487027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.729 [2024-04-23 16:12:05.487042] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.729 [2024-04-23 16:12:05.487053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.729 [2024-04-23 16:12:05.487122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.729 [2024-04-23 16:12:05.487235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.729 [2024-04-23 16:12:05.487257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.729 [2024-04-23 16:12:05.487271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.302 16:12:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.302 16:12:05 -- common/autotest_common.sh@852 -- # return 0 00:13:07.302 16:12:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:07.302 16:12:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:07.302 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.302 16:12:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.302 16:12:05 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:07.302 16:12:05 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:07.302 16:12:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.302 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:13:07.302 [2024-04-23 16:12:06.000976] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.302 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.302 16:12:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:07.302 16:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.302 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:07.561 Malloc1 00:13:07.561 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.561 16:12:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.561 16:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.561 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:07.561 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.561 16:12:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.561 16:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.561 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:07.561 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.561 16:12:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
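Everything from "ip netns add" through "nvmf_subsystem_add_listener" above is the target-side bring-up: the two cvl ports are split across a network namespace so the same host can act as initiator against 10.0.0.2, and the target is then configured over RPC. A condensed replay of that sequence (rpc.py shown in place of the test's rpc_cmd wrapper; waitforlisten and error handling omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0     # in-capsule data size 0 for this variant
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420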
00:13:07.561 16:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.561 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:07.561 [2024-04-23 16:12:06.261040] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.561 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.561 16:12:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:07.561 16:12:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:07.561 16:12:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:07.561 16:12:06 -- common/autotest_common.sh@1359 -- # local bs 00:13:07.561 16:12:06 -- common/autotest_common.sh@1360 -- # local nb 00:13:07.561 16:12:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:07.561 16:12:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.561 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:13:07.561 16:12:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.561 16:12:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:07.561 { 00:13:07.561 "name": "Malloc1", 00:13:07.562 "aliases": [ 00:13:07.562 "840e4b78-0cb5-459b-95ce-acba4707ddaa" 00:13:07.562 ], 00:13:07.562 "product_name": "Malloc disk", 00:13:07.562 "block_size": 512, 00:13:07.562 "num_blocks": 1048576, 00:13:07.562 "uuid": "840e4b78-0cb5-459b-95ce-acba4707ddaa", 00:13:07.562 "assigned_rate_limits": { 00:13:07.562 "rw_ios_per_sec": 0, 00:13:07.562 "rw_mbytes_per_sec": 0, 00:13:07.562 "r_mbytes_per_sec": 0, 00:13:07.562 "w_mbytes_per_sec": 0 00:13:07.562 }, 00:13:07.562 "claimed": true, 00:13:07.562 "claim_type": "exclusive_write", 00:13:07.562 "zoned": false, 00:13:07.562 "supported_io_types": { 00:13:07.562 "read": true, 00:13:07.562 "write": true, 00:13:07.562 "unmap": true, 00:13:07.562 "write_zeroes": true, 00:13:07.562 "flush": true, 00:13:07.562 "reset": true, 00:13:07.562 "compare": false, 00:13:07.562 "compare_and_write": false, 00:13:07.562 "abort": true, 00:13:07.562 "nvme_admin": false, 00:13:07.562 "nvme_io": false 00:13:07.562 }, 00:13:07.562 "memory_domains": [ 00:13:07.562 { 00:13:07.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.562 "dma_device_type": 2 00:13:07.562 } 00:13:07.562 ], 00:13:07.562 "driver_specific": {} 00:13:07.562 } 00:13:07.562 ]' 00:13:07.562 16:12:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:07.562 16:12:06 -- common/autotest_common.sh@1362 -- # bs=512 00:13:07.562 16:12:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:07.562 16:12:06 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:07.562 16:12:06 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:07.562 16:12:06 -- common/autotest_common.sh@1367 -- # echo 512 00:13:07.562 16:12:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:07.562 16:12:06 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.946 16:12:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.946 16:12:07 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.946 16:12:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.946 16:12:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:08.946 16:12:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:11.495 16:12:09 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:11.495 16:12:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.495 16:12:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:11.495 16:12:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:11.495 16:12:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.495 16:12:09 -- common/autotest_common.sh@1187 -- # return 0 00:13:11.495 16:12:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:11.495 16:12:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:11.495 16:12:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:11.495 16:12:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:11.495 16:12:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:11.495 16:12:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:11.495 16:12:09 -- setup/common.sh@80 -- # echo 536870912 00:13:11.495 16:12:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:11.495 16:12:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:11.495 16:12:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:11.495 16:12:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:11.495 16:12:10 -- target/filesystem.sh@69 -- # partprobe 00:13:12.068 16:12:10 -- target/filesystem.sh@70 -- # sleep 1 00:13:13.011 16:12:11 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:13.011 16:12:11 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:13.011 16:12:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:13.011 16:12:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.011 16:12:11 -- common/autotest_common.sh@10 -- # set +x 00:13:13.011 ************************************ 00:13:13.011 START TEST filesystem_ext4 00:13:13.011 ************************************ 00:13:13.011 16:12:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:13.011 16:12:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:13.011 16:12:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:13.011 16:12:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:13.011 16:12:11 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:13.011 16:12:11 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:13.011 16:12:11 -- common/autotest_common.sh@904 -- # local i=0 00:13:13.011 16:12:11 -- common/autotest_common.sh@905 -- # local force 00:13:13.011 16:12:11 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:13.011 16:12:11 -- common/autotest_common.sh@908 -- # force=-F 00:13:13.011 16:12:11 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:13.011 mke2fs 1.46.5 (30-Dec-2021) 00:13:13.273 Discarding device blocks: 0/522240 done 00:13:13.273 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:13.273 Filesystem UUID: ac92fdcd-c151-4316-881b-657eb3f7ad83 00:13:13.273 Superblock backups stored on blocks: 00:13:13.273 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:13.273 00:13:13.273 Allocating group tables: 0/64 done 00:13:13.273 Writing inode tables: 0/64 done 00:13:13.273 Creating journal (8192 blocks): done 00:13:13.273 Writing superblocks and filesystem accounting information: 0/64 done 00:13:13.273 00:13:13.273 16:12:12 -- 
common/autotest_common.sh@921 -- # return 0 00:13:13.273 16:12:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:13.535 16:12:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:13.535 16:12:12 -- target/filesystem.sh@25 -- # sync 00:13:13.535 16:12:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:13.535 16:12:12 -- target/filesystem.sh@27 -- # sync 00:13:13.535 16:12:12 -- target/filesystem.sh@29 -- # i=0 00:13:13.535 16:12:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:13.535 16:12:12 -- target/filesystem.sh@37 -- # kill -0 2980629 00:13:13.535 16:12:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:13.535 16:12:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:13.535 16:12:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:13.535 16:12:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:13.535 00:13:13.535 real 0m0.526s 00:13:13.535 user 0m0.026s 00:13:13.535 sys 0m0.041s 00:13:13.535 16:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.535 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.535 ************************************ 00:13:13.535 END TEST filesystem_ext4 00:13:13.535 ************************************ 00:13:13.797 16:12:12 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:13.797 16:12:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:13.797 16:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.797 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.797 ************************************ 00:13:13.797 START TEST filesystem_btrfs 00:13:13.797 ************************************ 00:13:13.797 16:12:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:13.797 16:12:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:13.797 16:12:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:13.797 16:12:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:13.797 16:12:12 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:13.797 16:12:12 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:13.797 16:12:12 -- common/autotest_common.sh@904 -- # local i=0 00:13:13.797 16:12:12 -- common/autotest_common.sh@905 -- # local force 00:13:13.797 16:12:12 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:13.797 16:12:12 -- common/autotest_common.sh@910 -- # force=-f 00:13:13.797 16:12:12 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:13.797 btrfs-progs v6.6.2 00:13:13.797 See https://btrfs.readthedocs.io for more information. 00:13:13.797 00:13:13.797 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:13.797 NOTE: several default settings have changed in version 5.15, please make sure 00:13:13.797 this does not affect your deployments: 00:13:13.797 - DUP for metadata (-m dup) 00:13:13.797 - enabled no-holes (-O no-holes) 00:13:13.797 - enabled free-space-tree (-R free-space-tree) 00:13:13.797 00:13:13.797 Label: (null) 00:13:13.797 UUID: d12c5c38-c66e-4e1d-868b-6d1795e6ff5e 00:13:13.797 Node size: 16384 00:13:13.797 Sector size: 4096 00:13:13.797 Filesystem size: 510.00MiB 00:13:13.797 Block group profiles: 00:13:13.797 Data: single 8.00MiB 00:13:13.797 Metadata: DUP 32.00MiB 00:13:13.797 System: DUP 8.00MiB 00:13:13.797 SSD detected: yes 00:13:13.797 Zoned device: no 00:13:13.797 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:13.797 Runtime features: free-space-tree 00:13:13.797 Checksum: crc32c 00:13:13.797 Number of devices: 1 00:13:13.797 Devices: 00:13:13.797 ID SIZE PATH 00:13:13.797 1 510.00MiB /dev/nvme0n1p1 00:13:13.797 00:13:13.797 16:12:12 -- common/autotest_common.sh@921 -- # return 0 00:13:13.797 16:12:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:14.739 16:12:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:14.739 16:12:13 -- target/filesystem.sh@25 -- # sync 00:13:14.739 16:12:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:14.739 16:12:13 -- target/filesystem.sh@27 -- # sync 00:13:14.739 16:12:13 -- target/filesystem.sh@29 -- # i=0 00:13:14.739 16:12:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:14.739 16:12:13 -- target/filesystem.sh@37 -- # kill -0 2980629 00:13:14.739 16:12:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:14.739 16:12:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:14.739 16:12:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:14.739 16:12:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:14.739 00:13:14.739 real 0m1.150s 00:13:14.739 user 0m0.023s 00:13:14.739 sys 0m0.063s 00:13:14.739 16:12:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.739 16:12:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.739 ************************************ 00:13:14.739 END TEST filesystem_btrfs 00:13:14.739 ************************************ 00:13:15.000 16:12:13 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:15.000 16:12:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:15.000 16:12:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.000 16:12:13 -- common/autotest_common.sh@10 -- # set +x 00:13:15.000 ************************************ 00:13:15.000 START TEST filesystem_xfs 00:13:15.000 ************************************ 00:13:15.000 16:12:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:15.000 16:12:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:15.000 16:12:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:15.000 16:12:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:15.000 16:12:13 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:15.000 16:12:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:15.000 16:12:13 -- common/autotest_common.sh@904 -- # local i=0 00:13:15.000 16:12:13 -- common/autotest_common.sh@905 -- # local force 00:13:15.000 16:12:13 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:15.000 16:12:13 -- common/autotest_common.sh@910 -- # force=-f 00:13:15.000 16:12:13 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:15.000 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:15.000 = sectsz=512 attr=2, projid32bit=1 00:13:15.000 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:15.000 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:15.000 data = bsize=4096 blocks=130560, imaxpct=25 00:13:15.000 = sunit=0 swidth=0 blks 00:13:15.000 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:15.001 log =internal log bsize=4096 blocks=16384, version=2 00:13:15.001 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:15.001 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:15.944 Discarding blocks...Done. 00:13:15.944 16:12:14 -- common/autotest_common.sh@921 -- # return 0 00:13:15.944 16:12:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.856 16:12:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.856 16:12:16 -- target/filesystem.sh@25 -- # sync 00:13:17.856 16:12:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.856 16:12:16 -- target/filesystem.sh@27 -- # sync 00:13:17.856 16:12:16 -- target/filesystem.sh@29 -- # i=0 00:13:17.857 16:12:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.857 16:12:16 -- target/filesystem.sh@37 -- # kill -0 2980629 00:13:17.857 16:12:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.857 16:12:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.857 16:12:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:17.857 16:12:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.857 00:13:17.857 real 0m3.072s 00:13:17.857 user 0m0.018s 00:13:17.857 sys 0m0.056s 00:13:17.857 16:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.857 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:13:17.857 ************************************ 00:13:17.857 END TEST filesystem_xfs 00:13:17.857 ************************************ 00:13:18.117 16:12:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:18.377 16:12:17 -- target/filesystem.sh@93 -- # sync 00:13:18.377 16:12:17 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.377 16:12:17 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.377 16:12:17 -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.377 16:12:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:18.377 16:12:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.377 16:12:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:18.377 16:12:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.377 16:12:17 -- common/autotest_common.sh@1210 -- # return 0 00:13:18.377 16:12:17 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.377 16:12:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.377 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.377 16:12:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.377 16:12:17 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:18.377 16:12:17 -- target/filesystem.sh@101 -- # killprocess 2980629 00:13:18.377 16:12:17 -- common/autotest_common.sh@926 -- # '[' -z 2980629 ']' 00:13:18.377 16:12:17 -- common/autotest_common.sh@930 -- # kill -0 2980629 00:13:18.377 16:12:17 -- 
common/autotest_common.sh@931 -- # uname 00:13:18.377 16:12:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.377 16:12:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2980629 00:13:18.377 16:12:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.377 16:12:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.377 16:12:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2980629' 00:13:18.377 killing process with pid 2980629 00:13:18.377 16:12:17 -- common/autotest_common.sh@945 -- # kill 2980629 00:13:18.377 16:12:17 -- common/autotest_common.sh@950 -- # wait 2980629 00:13:19.319 16:12:18 -- target/filesystem.sh@102 -- # nvmfpid= 00:13:19.319 00:13:19.319 real 0m13.049s 00:13:19.319 user 0m50.203s 00:13:19.319 sys 0m1.042s 00:13:19.319 16:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.319 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.319 ************************************ 00:13:19.319 END TEST nvmf_filesystem_no_in_capsule 00:13:19.319 ************************************ 00:13:19.319 16:12:18 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:19.319 16:12:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:19.319 16:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.319 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.319 ************************************ 00:13:19.319 START TEST nvmf_filesystem_in_capsule 00:13:19.319 ************************************ 00:13:19.319 16:12:18 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:13:19.319 16:12:18 -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:19.319 16:12:18 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:19.319 16:12:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:19.319 16:12:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:19.319 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.319 16:12:18 -- nvmf/common.sh@469 -- # nvmfpid=2983655 00:13:19.319 16:12:18 -- nvmf/common.sh@470 -- # waitforlisten 2983655 00:13:19.319 16:12:18 -- common/autotest_common.sh@819 -- # '[' -z 2983655 ']' 00:13:19.319 16:12:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.319 16:12:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:19.319 16:12:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.319 16:12:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:19.319 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.319 16:12:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.578 [2024-04-23 16:12:18.309117] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
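The trace above shows nvmfappstart launching nvmf_tgt inside the target network namespace and waitforlisten blocking until the RPC socket answers. A minimal sketch of that start-up and wait pattern; the paths and retry count are illustrative, not the harness's exact values:

    # sketch only: launch the target and poll its RPC socket until it answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # any successful RPC means /var/tmp/spdk.sock is up and listening
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done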
00:13:19.578 [2024-04-23 16:12:18.309226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.578 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.578 [2024-04-23 16:12:18.438195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.837 [2024-04-23 16:12:18.535830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.837 [2024-04-23 16:12:18.536019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.837 [2024-04-23 16:12:18.536033] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.837 [2024-04-23 16:12:18.536042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.838 [2024-04-23 16:12:18.536101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.838 [2024-04-23 16:12:18.536208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.838 [2024-04-23 16:12:18.536346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.838 [2024-04-23 16:12:18.536356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.099 16:12:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.099 16:12:19 -- common/autotest_common.sh@852 -- # return 0 00:13:20.099 16:12:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:20.099 16:12:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:20.099 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.360 16:12:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.360 16:12:19 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:20.360 16:12:19 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:20.360 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.360 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.360 [2024-04-23 16:12:19.063024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.360 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.360 16:12:19 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:20.360 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.360 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 Malloc1 00:13:20.622 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.622 16:12:19 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.622 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.622 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.622 16:12:19 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.622 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.622 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.622 16:12:19 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:20.622 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.622 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 [2024-04-23 16:12:19.326026] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.622 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.622 16:12:19 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:20.622 16:12:19 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:13:20.622 16:12:19 -- common/autotest_common.sh@1358 -- # local bdev_info 00:13:20.622 16:12:19 -- common/autotest_common.sh@1359 -- # local bs 00:13:20.622 16:12:19 -- common/autotest_common.sh@1360 -- # local nb 00:13:20.622 16:12:19 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:20.622 16:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.622 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.622 16:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.622 16:12:19 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:13:20.622 { 00:13:20.622 "name": "Malloc1", 00:13:20.622 "aliases": [ 00:13:20.622 "6b4a347a-19bf-4d3a-857d-f56cbebaf89b" 00:13:20.622 ], 00:13:20.622 "product_name": "Malloc disk", 00:13:20.622 "block_size": 512, 00:13:20.622 "num_blocks": 1048576, 00:13:20.622 "uuid": "6b4a347a-19bf-4d3a-857d-f56cbebaf89b", 00:13:20.622 "assigned_rate_limits": { 00:13:20.622 "rw_ios_per_sec": 0, 00:13:20.622 "rw_mbytes_per_sec": 0, 00:13:20.622 "r_mbytes_per_sec": 0, 00:13:20.622 "w_mbytes_per_sec": 0 00:13:20.622 }, 00:13:20.622 "claimed": true, 00:13:20.622 "claim_type": "exclusive_write", 00:13:20.622 "zoned": false, 00:13:20.622 "supported_io_types": { 00:13:20.622 "read": true, 00:13:20.622 "write": true, 00:13:20.622 "unmap": true, 00:13:20.622 "write_zeroes": true, 00:13:20.622 "flush": true, 00:13:20.622 "reset": true, 00:13:20.622 "compare": false, 00:13:20.622 "compare_and_write": false, 00:13:20.622 "abort": true, 00:13:20.622 "nvme_admin": false, 00:13:20.622 "nvme_io": false 00:13:20.622 }, 00:13:20.622 "memory_domains": [ 00:13:20.622 { 00:13:20.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.622 "dma_device_type": 2 00:13:20.622 } 00:13:20.622 ], 00:13:20.622 "driver_specific": {} 00:13:20.622 } 00:13:20.622 ]' 00:13:20.622 16:12:19 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:13:20.622 16:12:19 -- common/autotest_common.sh@1362 -- # bs=512 00:13:20.622 16:12:19 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:13:20.622 16:12:19 -- common/autotest_common.sh@1363 -- # nb=1048576 00:13:20.622 16:12:19 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:13:20.622 16:12:19 -- common/autotest_common.sh@1367 -- # echo 512 00:13:20.622 16:12:19 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:20.622 16:12:19 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.999 16:12:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.999 16:12:20 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.999 16:12:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.999 16:12:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:21.999 16:12:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:23.923 16:12:22 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:24.182 16:12:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:24.182 16:12:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.182 16:12:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:24.182 16:12:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.182 16:12:22 -- common/autotest_common.sh@1187 -- # return 0 00:13:24.182 16:12:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:24.182 16:12:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:24.182 16:12:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:24.182 16:12:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:24.182 16:12:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:24.182 16:12:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:24.182 16:12:22 -- setup/common.sh@80 -- # echo 536870912 00:13:24.182 16:12:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:24.182 16:12:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:24.182 16:12:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:24.182 16:12:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:24.182 16:12:23 -- target/filesystem.sh@69 -- # partprobe 00:13:24.440 16:12:23 -- target/filesystem.sh@70 -- # sleep 1 00:13:25.385 16:12:24 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:25.385 16:12:24 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:25.385 16:12:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:25.385 16:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.385 16:12:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.385 ************************************ 00:13:25.385 START TEST filesystem_in_capsule_ext4 00:13:25.385 ************************************ 00:13:25.385 16:12:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:25.385 16:12:24 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:25.385 16:12:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:25.385 16:12:24 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:25.385 16:12:24 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:13:25.385 16:12:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:25.385 16:12:24 -- common/autotest_common.sh@904 -- # local i=0 00:13:25.385 16:12:24 -- common/autotest_common.sh@905 -- # local force 00:13:25.385 16:12:24 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:13:25.385 16:12:24 -- common/autotest_common.sh@908 -- # force=-F 00:13:25.385 16:12:24 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:25.385 mke2fs 1.46.5 (30-Dec-2021) 00:13:25.385 Discarding device blocks: 0/522240 done 00:13:25.385 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:25.385 Filesystem UUID: 104a948f-ed91-4df7-8c48-a2fbd44d3c4d 00:13:25.385 Superblock backups stored on blocks: 00:13:25.385 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:25.385 00:13:25.385 Allocating group tables: 0/64 done 00:13:25.385 Writing inode tables: 0/64 done 00:13:25.647 Creating journal (8192 blocks): done 00:13:25.647 Writing superblocks and filesystem accounting information: 0/64 done 00:13:25.647 00:13:25.647 
16:12:24 -- common/autotest_common.sh@921 -- # return 0 00:13:25.647 16:12:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:25.905 16:12:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:25.905 16:12:24 -- target/filesystem.sh@25 -- # sync 00:13:25.905 16:12:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:25.905 16:12:24 -- target/filesystem.sh@27 -- # sync 00:13:25.905 16:12:24 -- target/filesystem.sh@29 -- # i=0 00:13:25.905 16:12:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:25.905 16:12:24 -- target/filesystem.sh@37 -- # kill -0 2983655 00:13:25.905 16:12:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:25.905 16:12:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:25.905 16:12:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:25.905 16:12:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:25.905 00:13:25.905 real 0m0.531s 00:13:25.905 user 0m0.017s 00:13:25.905 sys 0m0.040s 00:13:25.905 16:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.905 16:12:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.905 ************************************ 00:13:25.905 END TEST filesystem_in_capsule_ext4 00:13:25.905 ************************************ 00:13:25.905 16:12:24 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:25.905 16:12:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:25.905 16:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.905 16:12:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.905 ************************************ 00:13:25.905 START TEST filesystem_in_capsule_btrfs 00:13:25.905 ************************************ 00:13:25.905 16:12:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:25.905 16:12:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:25.905 16:12:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:25.905 16:12:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:25.905 16:12:24 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:13:25.905 16:12:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:25.905 16:12:24 -- common/autotest_common.sh@904 -- # local i=0 00:13:25.905 16:12:24 -- common/autotest_common.sh@905 -- # local force 00:13:25.905 16:12:24 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:13:25.905 16:12:24 -- common/autotest_common.sh@910 -- # force=-f 00:13:25.905 16:12:24 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:26.476 btrfs-progs v6.6.2 00:13:26.476 See https://btrfs.readthedocs.io for more information. 00:13:26.476 00:13:26.476 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
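The block traced above (filesystem.sh lines 23-43) is the same smoke test every filesystem variant runs: mount the exported namespace's partition, prove it accepts a write and a delete, then unmount and confirm the target process and the block device are still present. Condensed, with the device name and pid variable from this run:

    mount /dev/nvme0n1p1 /mnt/device          # partition created earlier with parted
    touch /mnt/device/aaa && sync             # filesystem accepts writes
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt still alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible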
00:13:26.476 NOTE: several default settings have changed in version 5.15, please make sure 00:13:26.476 this does not affect your deployments: 00:13:26.476 - DUP for metadata (-m dup) 00:13:26.476 - enabled no-holes (-O no-holes) 00:13:26.476 - enabled free-space-tree (-R free-space-tree) 00:13:26.476 00:13:26.476 Label: (null) 00:13:26.476 UUID: 312dfdfe-e3bd-465f-86e5-1491ec24b3eb 00:13:26.476 Node size: 16384 00:13:26.476 Sector size: 4096 00:13:26.476 Filesystem size: 510.00MiB 00:13:26.476 Block group profiles: 00:13:26.476 Data: single 8.00MiB 00:13:26.476 Metadata: DUP 32.00MiB 00:13:26.476 System: DUP 8.00MiB 00:13:26.476 SSD detected: yes 00:13:26.476 Zoned device: no 00:13:26.476 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:26.476 Runtime features: free-space-tree 00:13:26.476 Checksum: crc32c 00:13:26.476 Number of devices: 1 00:13:26.476 Devices: 00:13:26.476 ID SIZE PATH 00:13:26.476 1 510.00MiB /dev/nvme0n1p1 00:13:26.476 00:13:26.476 16:12:25 -- common/autotest_common.sh@921 -- # return 0 00:13:26.476 16:12:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:27.046 16:12:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:27.046 16:12:25 -- target/filesystem.sh@25 -- # sync 00:13:27.046 16:12:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:27.046 16:12:25 -- target/filesystem.sh@27 -- # sync 00:13:27.046 16:12:25 -- target/filesystem.sh@29 -- # i=0 00:13:27.046 16:12:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:27.046 16:12:25 -- target/filesystem.sh@37 -- # kill -0 2983655 00:13:27.046 16:12:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:27.046 16:12:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:27.046 16:12:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:27.046 16:12:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:27.046 00:13:27.046 real 0m1.227s 00:13:27.046 user 0m0.017s 00:13:27.046 sys 0m0.062s 00:13:27.046 16:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.046 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.046 ************************************ 00:13:27.046 END TEST filesystem_in_capsule_btrfs 00:13:27.046 ************************************ 00:13:27.304 16:12:25 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:27.304 16:12:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:27.304 16:12:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.304 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:13:27.304 ************************************ 00:13:27.304 START TEST filesystem_in_capsule_xfs 00:13:27.304 ************************************ 00:13:27.304 16:12:26 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:13:27.304 16:12:26 -- target/filesystem.sh@18 -- # fstype=xfs 00:13:27.304 16:12:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:27.304 16:12:26 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:27.304 16:12:26 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:13:27.305 16:12:26 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:13:27.305 16:12:26 -- common/autotest_common.sh@904 -- # local i=0 00:13:27.305 16:12:26 -- common/autotest_common.sh@905 -- # local force 00:13:27.305 16:12:26 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:13:27.305 16:12:26 -- common/autotest_common.sh@910 -- # force=-f 
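make_filesystem (autotest_common.sh lines 902-921 above) only has to pick the right force flag for the requested filesystem and invoke the matching mkfs tool. A rough sketch, omitting the retry handling that is not visible in this trace:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4's mkfs forces with -F; btrfs and xfs use -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }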
00:13:27.305 16:12:26 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:27.305 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:27.305 = sectsz=512 attr=2, projid32bit=1 00:13:27.305 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:27.305 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:27.305 data = bsize=4096 blocks=130560, imaxpct=25 00:13:27.305 = sunit=0 swidth=0 blks 00:13:27.305 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:27.305 log =internal log bsize=4096 blocks=16384, version=2 00:13:27.305 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:27.305 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:28.244 Discarding blocks...Done. 00:13:28.244 16:12:26 -- common/autotest_common.sh@921 -- # return 0 00:13:28.244 16:12:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.161 16:12:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.161 16:12:28 -- target/filesystem.sh@25 -- # sync 00:13:30.161 16:12:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.161 16:12:28 -- target/filesystem.sh@27 -- # sync 00:13:30.161 16:12:28 -- target/filesystem.sh@29 -- # i=0 00:13:30.161 16:12:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.161 16:12:28 -- target/filesystem.sh@37 -- # kill -0 2983655 00:13:30.161 16:12:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.161 16:12:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.161 16:12:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.161 16:12:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.161 00:13:30.161 real 0m2.675s 00:13:30.161 user 0m0.015s 00:13:30.161 sys 0m0.056s 00:13:30.161 16:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.161 16:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:30.161 ************************************ 00:13:30.161 END TEST filesystem_in_capsule_xfs 00:13:30.161 ************************************ 00:13:30.161 16:12:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:30.161 16:12:28 -- target/filesystem.sh@93 -- # sync 00:13:30.161 16:12:28 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.161 16:12:28 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.161 16:12:28 -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.161 16:12:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:30.161 16:12:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.161 16:12:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:30.161 16:12:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.161 16:12:28 -- common/autotest_common.sh@1210 -- # return 0 00:13:30.161 16:12:28 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.161 16:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.161 16:12:28 -- common/autotest_common.sh@10 -- # set +x 00:13:30.161 16:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.161 16:12:28 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:30.161 16:12:28 -- target/filesystem.sh@101 -- # killprocess 2983655 00:13:30.161 16:12:28 -- common/autotest_common.sh@926 -- # '[' -z 2983655 ']' 00:13:30.161 16:12:28 -- common/autotest_common.sh@930 -- # kill -0 2983655 
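After nvme disconnect, waitforserial_disconnect (autotest_common.sh lines 1198-1210 above) polls lsblk until no block device carrying the subsystem's serial is left. An approximation of that loop; the retry limit here is assumed, not taken from the script:

    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1    # assumed upper bound on retries
            sleep 1
        done
        return 0
    }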
00:13:30.161 16:12:28 -- common/autotest_common.sh@931 -- # uname 00:13:30.161 16:12:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.161 16:12:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2983655 00:13:30.161 16:12:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:30.161 16:12:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:30.161 16:12:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2983655' 00:13:30.161 killing process with pid 2983655 00:13:30.161 16:12:29 -- common/autotest_common.sh@945 -- # kill 2983655 00:13:30.161 16:12:29 -- common/autotest_common.sh@950 -- # wait 2983655 00:13:31.105 16:12:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:13:31.105 00:13:31.105 real 0m11.744s 00:13:31.105 user 0m45.021s 00:13:31.105 sys 0m1.047s 00:13:31.105 16:12:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.105 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:13:31.105 ************************************ 00:13:31.105 END TEST nvmf_filesystem_in_capsule 00:13:31.105 ************************************ 00:13:31.105 16:12:30 -- target/filesystem.sh@108 -- # nvmftestfini 00:13:31.105 16:12:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:31.105 16:12:30 -- nvmf/common.sh@116 -- # sync 00:13:31.105 16:12:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:31.105 16:12:30 -- nvmf/common.sh@119 -- # set +e 00:13:31.105 16:12:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:31.105 16:12:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:31.105 rmmod nvme_tcp 00:13:31.366 rmmod nvme_fabrics 00:13:31.366 rmmod nvme_keyring 00:13:31.366 16:12:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:31.366 16:12:30 -- nvmf/common.sh@123 -- # set -e 00:13:31.366 16:12:30 -- nvmf/common.sh@124 -- # return 0 00:13:31.366 16:12:30 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:13:31.366 16:12:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:31.366 16:12:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:31.366 16:12:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:31.366 16:12:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.366 16:12:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:31.366 16:12:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.366 16:12:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.366 16:12:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.281 16:12:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:33.281 00:13:33.281 real 0m33.622s 00:13:33.281 user 1m37.108s 00:13:33.281 sys 0m6.923s 00:13:33.281 16:12:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.281 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:13:33.281 ************************************ 00:13:33.281 END TEST nvmf_filesystem 00:13:33.281 ************************************ 00:13:33.281 16:12:32 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:33.281 16:12:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:33.281 16:12:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:33.281 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:13:33.281 ************************************ 00:13:33.281 START TEST nvmf_discovery 00:13:33.281 ************************************ 00:13:33.281 16:12:32 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:33.540 * Looking for test storage... 00:13:33.540 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:33.540 16:12:32 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.540 16:12:32 -- nvmf/common.sh@7 -- # uname -s 00:13:33.540 16:12:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.540 16:12:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.540 16:12:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.540 16:12:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.540 16:12:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.540 16:12:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.540 16:12:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.540 16:12:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.540 16:12:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.540 16:12:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.540 16:12:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:33.540 16:12:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:33.540 16:12:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.540 16:12:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.540 16:12:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:33.540 16:12:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:33.540 16:12:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.540 16:12:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.540 16:12:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.540 16:12:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.540 16:12:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.540 16:12:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.540 16:12:32 -- paths/export.sh@5 -- # export PATH 00:13:33.540 16:12:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.540 16:12:32 -- nvmf/common.sh@46 -- # : 0 00:13:33.540 16:12:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:33.540 16:12:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:33.540 16:12:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:33.540 16:12:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.540 16:12:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.540 16:12:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:33.540 16:12:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:33.540 16:12:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:33.540 16:12:32 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:33.540 16:12:32 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:33.540 16:12:32 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:33.540 16:12:32 -- target/discovery.sh@15 -- # hash nvme 00:13:33.540 16:12:32 -- target/discovery.sh@20 -- # nvmftestinit 00:13:33.540 16:12:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:33.540 16:12:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.540 16:12:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:33.540 16:12:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:33.540 16:12:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:33.540 16:12:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.540 16:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.540 16:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.540 16:12:32 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:33.540 16:12:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:33.540 16:12:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:33.540 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:13:38.818 16:12:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:38.818 16:12:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:38.818 16:12:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:38.818 16:12:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:38.818 16:12:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:38.818 16:12:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:38.818 16:12:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:38.818 
16:12:37 -- nvmf/common.sh@294 -- # net_devs=() 00:13:38.818 16:12:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:38.818 16:12:37 -- nvmf/common.sh@295 -- # e810=() 00:13:38.818 16:12:37 -- nvmf/common.sh@295 -- # local -ga e810 00:13:38.818 16:12:37 -- nvmf/common.sh@296 -- # x722=() 00:13:38.818 16:12:37 -- nvmf/common.sh@296 -- # local -ga x722 00:13:38.818 16:12:37 -- nvmf/common.sh@297 -- # mlx=() 00:13:38.818 16:12:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:38.818 16:12:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.818 16:12:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:38.818 16:12:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:38.818 16:12:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.818 16:12:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:38.818 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:38.818 16:12:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:38.818 16:12:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:38.818 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:38.818 16:12:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:38.818 16:12:37 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:38.818 16:12:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.818 16:12:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.818 16:12:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.818 16:12:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.818 16:12:37 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:38.818 Found net devices under 0000:27:00.0: cvl_0_0 00:13:38.818 16:12:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.818 16:12:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:38.818 16:12:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.818 16:12:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:38.818 16:12:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.818 16:12:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:38.819 Found net devices under 0000:27:00.1: cvl_0_1 00:13:38.819 16:12:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.819 16:12:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:38.819 16:12:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:38.819 16:12:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:38.819 16:12:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:38.819 16:12:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:38.819 16:12:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.819 16:12:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.819 16:12:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.819 16:12:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:38.819 16:12:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.819 16:12:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.819 16:12:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:38.819 16:12:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.819 16:12:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.819 16:12:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:38.819 16:12:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:38.819 16:12:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.819 16:12:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.819 16:12:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.819 16:12:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.819 16:12:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:38.819 16:12:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.819 16:12:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.819 16:12:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.819 16:12:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:38.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:13:38.819 00:13:38.819 --- 10.0.0.2 ping statistics --- 00:13:38.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.819 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:13:38.819 16:12:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:13:38.819 00:13:38.819 --- 10.0.0.1 ping statistics --- 00:13:38.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.819 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:13:38.819 16:12:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.819 16:12:37 -- nvmf/common.sh@410 -- # return 0 00:13:38.819 16:12:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:38.819 16:12:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.819 16:12:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:38.819 16:12:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:38.819 16:12:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.819 16:12:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:38.819 16:12:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:38.819 16:12:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:38.819 16:12:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:38.819 16:12:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:38.819 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:13:38.819 16:12:37 -- nvmf/common.sh@469 -- # nvmfpid=2990015 00:13:38.819 16:12:37 -- nvmf/common.sh@470 -- # waitforlisten 2990015 00:13:38.819 16:12:37 -- common/autotest_common.sh@819 -- # '[' -z 2990015 ']' 00:13:38.819 16:12:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.819 16:12:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.819 16:12:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.819 16:12:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.819 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:13:38.819 16:12:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.819 [2024-04-23 16:12:37.587377] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:13:38.819 [2024-04-23 16:12:37.587497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.819 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.819 [2024-04-23 16:12:37.725036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.080 [2024-04-23 16:12:37.834350] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.080 [2024-04-23 16:12:37.834536] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.080 [2024-04-23 16:12:37.834552] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.080 [2024-04-23 16:12:37.834563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
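The nvmf_tcp_init commands traced above (nvmf/common.sh lines 241-267) build the point-to-point test topology: the target-side interface moves into its own network namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened, and reachability is confirmed with ping. The same sequence, condensed, with the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator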
00:13:39.080 [2024-04-23 16:12:37.834643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.080 [2024-04-23 16:12:37.834783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.080 [2024-04-23 16:12:37.834895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.080 [2024-04-23 16:12:37.834907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.649 16:12:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.649 16:12:38 -- common/autotest_common.sh@852 -- # return 0 00:13:39.649 16:12:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:39.649 16:12:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:39.649 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.649 16:12:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.649 16:12:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.649 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.649 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.649 [2024-04-23 16:12:38.346610] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.649 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.649 16:12:38 -- target/discovery.sh@26 -- # seq 1 4 00:13:39.649 16:12:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.649 16:12:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:39.649 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.649 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.649 Null1 00:13:39.649 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.649 16:12:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:39.649 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.649 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.649 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.649 16:12:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:39.649 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 [2024-04-23 16:12:38.390821] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.650 16:12:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 Null2 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:39.650 16:12:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.650 16:12:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 Null3 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:39.650 16:12:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 Null4 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 
16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.650 16:12:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 4420 00:13:39.650 00:13:39.650 Discovery Log Number of Records 6, Generation counter 6 00:13:39.650 =====Discovery Log Entry 0====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: current discovery subsystem 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4420 00:13:39.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: explicit discovery connections, duplicate discovery information 00:13:39.650 sectype: none 00:13:39.650 =====Discovery Log Entry 1====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: nvme subsystem 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4420 00:13:39.650 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: none 00:13:39.650 sectype: none 00:13:39.650 =====Discovery Log Entry 2====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: nvme subsystem 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4420 00:13:39.650 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: none 00:13:39.650 sectype: none 00:13:39.650 =====Discovery Log Entry 3====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: nvme subsystem 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4420 00:13:39.650 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: none 00:13:39.650 sectype: none 00:13:39.650 =====Discovery Log Entry 4====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: nvme subsystem 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4420 00:13:39.650 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: none 00:13:39.650 sectype: none 00:13:39.650 =====Discovery Log Entry 5====== 00:13:39.650 trtype: tcp 00:13:39.650 adrfam: ipv4 00:13:39.650 subtype: discovery subsystem referral 00:13:39.650 treq: not required 00:13:39.650 portid: 0 00:13:39.650 trsvcid: 4430 00:13:39.650 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:39.650 traddr: 10.0.0.2 00:13:39.650 eflags: none 00:13:39.650 sectype: none 00:13:39.650 16:12:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:39.650 Perform nvmf subsystem discovery via RPC 00:13:39.650 16:12:38 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:39.650 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.650 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 [2024-04-23 16:12:38.570856] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:39.650 [ 00:13:39.650 { 00:13:39.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.650 "subtype": "Discovery", 00:13:39.650 "listen_addresses": [ 00:13:39.650 { 00:13:39.650 "transport": "TCP", 00:13:39.650 "trtype": "TCP", 00:13:39.650 "adrfam": "IPv4", 00:13:39.650 "traddr": "10.0.0.2", 00:13:39.650 "trsvcid": "4420" 00:13:39.650 } 00:13:39.650 ], 00:13:39.650 "allow_any_host": true, 00:13:39.650 "hosts": [] 00:13:39.650 }, 00:13:39.650 { 00:13:39.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.650 "subtype": "NVMe", 00:13:39.650 "listen_addresses": [ 00:13:39.650 { 00:13:39.650 "transport": "TCP", 00:13:39.650 "trtype": "TCP", 00:13:39.650 "adrfam": "IPv4", 00:13:39.650 "traddr": "10.0.0.2", 00:13:39.650 "trsvcid": "4420" 00:13:39.650 } 00:13:39.650 ], 00:13:39.650 "allow_any_host": true, 00:13:39.650 "hosts": [], 00:13:39.650 "serial_number": "SPDK00000000000001", 00:13:39.650 "model_number": "SPDK bdev Controller", 00:13:39.650 "max_namespaces": 32, 00:13:39.650 "min_cntlid": 1, 00:13:39.650 "max_cntlid": 65519, 00:13:39.650 "namespaces": [ 00:13:39.650 { 00:13:39.650 "nsid": 1, 00:13:39.650 "bdev_name": "Null1", 00:13:39.650 "name": "Null1", 00:13:39.650 "nguid": "72FD55E75D8B42B792543225DB243B10", 00:13:39.650 "uuid": "72fd55e7-5d8b-42b7-9254-3225db243b10" 00:13:39.650 } 00:13:39.650 ] 00:13:39.650 }, 00:13:39.650 { 00:13:39.650 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:39.650 "subtype": "NVMe", 00:13:39.650 "listen_addresses": [ 00:13:39.650 { 00:13:39.650 "transport": "TCP", 00:13:39.650 "trtype": "TCP", 00:13:39.650 "adrfam": "IPv4", 00:13:39.650 "traddr": "10.0.0.2", 00:13:39.650 "trsvcid": "4420" 00:13:39.650 } 00:13:39.650 ], 00:13:39.650 "allow_any_host": true, 00:13:39.650 "hosts": [], 00:13:39.650 "serial_number": "SPDK00000000000002", 00:13:39.650 "model_number": "SPDK bdev Controller", 00:13:39.650 "max_namespaces": 32, 00:13:39.650 "min_cntlid": 1, 00:13:39.650 "max_cntlid": 65519, 00:13:39.650 "namespaces": [ 00:13:39.650 { 00:13:39.651 "nsid": 1, 00:13:39.651 "bdev_name": "Null2", 00:13:39.651 "name": "Null2", 00:13:39.651 "nguid": "B11F9CFD252C472D899095DA93A402E8", 00:13:39.651 "uuid": "b11f9cfd-252c-472d-8990-95da93a402e8" 00:13:39.651 } 00:13:39.651 ] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:39.651 "subtype": "NVMe", 00:13:39.651 "listen_addresses": [ 00:13:39.651 { 00:13:39.651 "transport": "TCP", 00:13:39.651 "trtype": "TCP", 00:13:39.651 "adrfam": "IPv4", 00:13:39.651 "traddr": "10.0.0.2", 00:13:39.651 "trsvcid": "4420" 00:13:39.651 } 00:13:39.651 ], 00:13:39.651 "allow_any_host": true, 00:13:39.651 "hosts": [], 00:13:39.651 "serial_number": "SPDK00000000000003", 00:13:39.651 "model_number": "SPDK bdev Controller", 00:13:39.651 "max_namespaces": 32, 00:13:39.651 "min_cntlid": 1, 00:13:39.651 "max_cntlid": 65519, 00:13:39.651 "namespaces": [ 00:13:39.651 { 00:13:39.651 "nsid": 1, 00:13:39.651 "bdev_name": "Null3", 00:13:39.911 "name": "Null3", 00:13:39.911 "nguid": "76098C41156F4D83BDCB9429FE663DCA", 00:13:39.911 "uuid": "76098c41-156f-4d83-bdcb-9429fe663dca" 00:13:39.911 } 00:13:39.911 ] 
00:13:39.911 }, 00:13:39.911 { 00:13:39.911 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:39.911 "subtype": "NVMe", 00:13:39.911 "listen_addresses": [ 00:13:39.911 { 00:13:39.911 "transport": "TCP", 00:13:39.911 "trtype": "TCP", 00:13:39.911 "adrfam": "IPv4", 00:13:39.911 "traddr": "10.0.0.2", 00:13:39.911 "trsvcid": "4420" 00:13:39.911 } 00:13:39.911 ], 00:13:39.911 "allow_any_host": true, 00:13:39.911 "hosts": [], 00:13:39.911 "serial_number": "SPDK00000000000004", 00:13:39.911 "model_number": "SPDK bdev Controller", 00:13:39.911 "max_namespaces": 32, 00:13:39.911 "min_cntlid": 1, 00:13:39.911 "max_cntlid": 65519, 00:13:39.911 "namespaces": [ 00:13:39.911 { 00:13:39.911 "nsid": 1, 00:13:39.911 "bdev_name": "Null4", 00:13:39.911 "name": "Null4", 00:13:39.911 "nguid": "0F4A192454F14A5D892C0DCD25BAB7F8", 00:13:39.911 "uuid": "0f4a1924-54f1-4a5d-892c-0dcd25bab7f8" 00:13:39.911 } 00:13:39.911 ] 00:13:39.911 } 00:13:39.911 ] 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@42 -- # seq 1 4 00:13:39.911 16:12:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.911 16:12:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.911 16:12:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.911 16:12:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:39.911 16:12:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
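The four iterations above all follow the same pattern: create a null bdev (Null1..Null4), create a subsystem (nqn.2016-06.io.spdk:cnode1..4) with any host allowed and a fixed serial number, attach the bdev as namespace 1, and add a TCP listener on 10.0.0.2:4420; a discovery referral to port 4430 is then added, which is why the subsequent nvme discover reports six records (the current discovery subsystem, four NVMe subsystems, one referral) and nvmf_get_subsystems returns the matching JSON. rpc_cmd is the autotest wrapper that forwards these calls to SPDK's scripts/rpc.py, so a hedged sketch of one iteration as direct rpc.py invocations (assuming the default /var/tmp/spdk.sock RPC socket) would be:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # once, before the loop
    ./scripts/rpc.py bdev_null_create Null1 102400 512                       # size / block size as traced above
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430  # once, after the loop
    nvme discover -t tcp -a 10.0.0.2 -s 4420                                 # expect 6 discovery log records
    ./scripts/rpc.py nvmf_get_subsystems                                     # same view over RPC

The teardown that begins above simply reverses this per subsystem: nvmf_delete_subsystem followed by bdev_null_delete, then nvmf_discovery_remove_referral for the port 4430 entry.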
00:13:39.911 16:12:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:39.911 16:12:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:39.911 16:12:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.911 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.911 16:12:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.911 16:12:38 -- target/discovery.sh@49 -- # check_bdevs= 00:13:39.911 16:12:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:39.911 16:12:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:39.911 16:12:38 -- target/discovery.sh@57 -- # nvmftestfini 00:13:39.911 16:12:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.912 16:12:38 -- nvmf/common.sh@116 -- # sync 00:13:39.912 16:12:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:39.912 16:12:38 -- nvmf/common.sh@119 -- # set +e 00:13:39.912 16:12:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:39.912 16:12:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:39.912 rmmod nvme_tcp 00:13:39.912 rmmod nvme_fabrics 00:13:39.912 rmmod nvme_keyring 00:13:39.912 16:12:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:39.912 16:12:38 -- nvmf/common.sh@123 -- # set -e 00:13:39.912 16:12:38 -- nvmf/common.sh@124 -- # return 0 00:13:39.912 16:12:38 -- nvmf/common.sh@477 -- # '[' -n 2990015 ']' 00:13:39.912 16:12:38 -- nvmf/common.sh@478 -- # killprocess 2990015 00:13:39.912 16:12:38 -- common/autotest_common.sh@926 -- # '[' -z 2990015 ']' 00:13:39.912 16:12:38 -- common/autotest_common.sh@930 -- # kill -0 2990015 00:13:39.912 16:12:38 -- common/autotest_common.sh@931 -- # uname 00:13:39.912 16:12:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.912 16:12:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2990015 00:13:39.912 16:12:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:39.912 16:12:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:39.912 16:12:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2990015' 00:13:39.912 killing process with pid 2990015 00:13:39.912 16:12:38 -- common/autotest_common.sh@945 -- # kill 2990015 00:13:39.912 [2024-04-23 16:12:38.776390] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:39.912 16:12:38 -- common/autotest_common.sh@950 -- # wait 2990015 00:13:40.485 16:12:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:40.485 16:12:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:40.485 16:12:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:40.485 16:12:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.485 16:12:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:40.485 16:12:39 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.485 16:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.485 16:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.394 16:12:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:42.394 00:13:42.394 real 0m9.100s 00:13:42.394 user 0m6.716s 00:13:42.394 sys 0m4.194s 00:13:42.394 16:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.394 16:12:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.394 ************************************ 00:13:42.394 END TEST nvmf_discovery 00:13:42.394 ************************************ 00:13:42.654 16:12:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:42.654 16:12:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:42.654 16:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.654 16:12:41 -- common/autotest_common.sh@10 -- # set +x 00:13:42.654 ************************************ 00:13:42.654 START TEST nvmf_referrals 00:13:42.654 ************************************ 00:13:42.654 16:12:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:42.654 * Looking for test storage... 00:13:42.654 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:42.654 16:12:41 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.654 16:12:41 -- nvmf/common.sh@7 -- # uname -s 00:13:42.654 16:12:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.654 16:12:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.654 16:12:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.654 16:12:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.654 16:12:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.654 16:12:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.654 16:12:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.654 16:12:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.654 16:12:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.654 16:12:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.654 16:12:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:42.654 16:12:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:42.654 16:12:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.654 16:12:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.654 16:12:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:42.654 16:12:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:42.654 16:12:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.654 16:12:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.654 16:12:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.654 16:12:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.654 16:12:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.654 16:12:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.654 16:12:41 -- paths/export.sh@5 -- # export PATH 00:13:42.654 16:12:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.654 16:12:41 -- nvmf/common.sh@46 -- # : 0 00:13:42.654 16:12:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.654 16:12:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.654 16:12:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.654 16:12:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.655 16:12:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.655 16:12:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:42.655 16:12:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.655 16:12:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.655 16:12:41 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:42.655 16:12:41 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:42.655 16:12:41 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:42.655 16:12:41 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:42.655 16:12:41 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:42.655 16:12:41 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:42.655 16:12:41 -- target/referrals.sh@37 -- # nvmftestinit 00:13:42.655 16:12:41 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:13:42.655 16:12:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.655 16:12:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:42.655 16:12:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:42.655 16:12:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:42.655 16:12:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.655 16:12:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.655 16:12:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.655 16:12:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:42.655 16:12:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:42.655 16:12:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:42.655 16:12:41 -- common/autotest_common.sh@10 -- # set +x 00:13:47.944 16:12:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.944 16:12:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:47.944 16:12:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:47.944 16:12:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:47.944 16:12:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:47.944 16:12:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:47.944 16:12:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:47.944 16:12:46 -- nvmf/common.sh@294 -- # net_devs=() 00:13:47.944 16:12:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:47.944 16:12:46 -- nvmf/common.sh@295 -- # e810=() 00:13:47.944 16:12:46 -- nvmf/common.sh@295 -- # local -ga e810 00:13:47.944 16:12:46 -- nvmf/common.sh@296 -- # x722=() 00:13:47.944 16:12:46 -- nvmf/common.sh@296 -- # local -ga x722 00:13:47.944 16:12:46 -- nvmf/common.sh@297 -- # mlx=() 00:13:47.944 16:12:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:47.944 16:12:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.944 16:12:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:47.944 16:12:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.944 16:12:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:47.944 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:47.944 16:12:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.944 16:12:46 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.944 16:12:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:47.944 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:47.944 16:12:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.944 16:12:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.944 16:12:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.944 16:12:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:47.944 Found net devices under 0000:27:00.0: cvl_0_0 00:13:47.944 16:12:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.944 16:12:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.944 16:12:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.944 16:12:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.944 16:12:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:47.944 Found net devices under 0000:27:00.1: cvl_0_1 00:13:47.944 16:12:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.944 16:12:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:47.944 16:12:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:47.944 16:12:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:47.944 16:12:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.944 16:12:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.944 16:12:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.944 16:12:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:47.944 16:12:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.944 16:12:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.944 16:12:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:47.944 16:12:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.944 16:12:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.944 16:12:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:47.944 16:12:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:47.944 16:12:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.944 16:12:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.944 16:12:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:13:47.944 16:12:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.944 16:12:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:47.944 16:12:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.944 16:12:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.944 16:12:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.944 16:12:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:47.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:13:47.944 00:13:47.945 --- 10.0.0.2 ping statistics --- 00:13:47.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.945 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:13:47.945 16:12:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.710 ms 00:13:47.945 00:13:47.945 --- 10.0.0.1 ping statistics --- 00:13:47.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.945 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:13:47.945 16:12:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.204 16:12:46 -- nvmf/common.sh@410 -- # return 0 00:13:48.204 16:12:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:48.204 16:12:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.204 16:12:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:48.204 16:12:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:48.204 16:12:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.204 16:12:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:48.204 16:12:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:48.204 16:12:46 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:48.204 16:12:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:48.204 16:12:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:48.204 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.204 16:12:46 -- nvmf/common.sh@469 -- # nvmfpid=2994236 00:13:48.204 16:12:46 -- nvmf/common.sh@470 -- # waitforlisten 2994236 00:13:48.204 16:12:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.204 16:12:46 -- common/autotest_common.sh@819 -- # '[' -z 2994236 ']' 00:13:48.204 16:12:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.204 16:12:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:48.204 16:12:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.204 16:12:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:48.204 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.204 [2024-04-23 16:12:46.982611] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:13:48.204 [2024-04-23 16:12:46.982733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.204 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.204 [2024-04-23 16:12:47.110896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.463 [2024-04-23 16:12:47.219363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.463 [2024-04-23 16:12:47.219546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.463 [2024-04-23 16:12:47.219560] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.463 [2024-04-23 16:12:47.219570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.463 [2024-04-23 16:12:47.219652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.463 [2024-04-23 16:12:47.219800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.463 [2024-04-23 16:12:47.219830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.463 [2024-04-23 16:12:47.219839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.038 16:12:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:49.038 16:12:47 -- common/autotest_common.sh@852 -- # return 0 00:13:49.038 16:12:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:49.038 16:12:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 16:12:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.038 16:12:47 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 [2024-04-23 16:12:47.718174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 [2024-04-23 16:12:47.734403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.038 16:12:47 -- target/referrals.sh@48 -- # jq length 00:13:49.038 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.038 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.038 16:12:47 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:49.038 16:12:47 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:49.038 16:12:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.038 16:12:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.038 16:12:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.039 16:12:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.039 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.039 16:12:47 -- target/referrals.sh@21 -- # sort 00:13:49.039 16:12:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.039 16:12:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:49.039 16:12:47 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:49.039 16:12:47 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:49.039 16:12:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.039 16:12:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.039 16:12:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.039 16:12:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.039 16:12:47 -- target/referrals.sh@26 -- # sort 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:49.385 16:12:48 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.385 16:12:48 -- target/referrals.sh@56 -- # jq length 00:13:49.385 16:12:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:49.385 16:12:48 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:49.385 16:12:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.385 16:12:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # sort 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # echo 00:13:49.385 16:12:48 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:49.385 16:12:48 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:49.385 16:12:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.385 16:12:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.385 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.385 16:12:48 -- target/referrals.sh@21 -- # sort 00:13:49.385 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.385 16:12:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.385 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:49.385 16:12:48 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:49.385 16:12:48 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:49.385 16:12:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.385 16:12:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.385 16:12:48 -- target/referrals.sh@26 -- # sort 00:13:49.711 16:12:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:49.711 16:12:48 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:49.711 16:12:48 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:49.711 16:12:48 -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:49.711 16:12:48 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:49.711 16:12:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.711 16:12:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:49.711 16:12:48 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:49.711 16:12:48 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:49.711 16:12:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:49.711 16:12:48 -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:49.711 16:12:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.711 16:12:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:49.970 16:12:48 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:49.970 16:12:48 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:49.970 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.970 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.970 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.970 16:12:48 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:49.970 16:12:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.970 16:12:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.970 16:12:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.970 16:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.970 16:12:48 -- target/referrals.sh@21 -- # sort 00:13:49.970 16:12:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.970 16:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.970 16:12:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:49.970 16:12:48 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:49.970 16:12:48 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:49.970 16:12:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.970 16:12:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.970 16:12:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.970 16:12:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.970 16:12:48 -- target/referrals.sh@26 -- # sort 00:13:49.970 16:12:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:49.970 16:12:48 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:49.970 16:12:48 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:49.970 16:12:48 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:49.970 16:12:48 -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:49.970 16:12:48 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.970 16:12:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:50.230 16:12:48 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:50.230 16:12:48 -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:50.230 16:12:48 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:50.230 16:12:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:50.230 16:12:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:50.230 16:12:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.230 16:12:49 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:50.230 16:12:49 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:50.230 16:12:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.230 16:12:49 -- common/autotest_common.sh@10 -- # set +x 00:13:50.230 16:12:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.230 16:12:49 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:50.230 16:12:49 -- target/referrals.sh@82 -- # jq length 00:13:50.230 16:12:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.230 16:12:49 -- common/autotest_common.sh@10 -- # set +x 00:13:50.230 16:12:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.230 16:12:49 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:50.230 16:12:49 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:50.230 16:12:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:50.230 16:12:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:50.230 16:12:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:50.230 16:12:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:50.230 16:12:49 -- target/referrals.sh@26 -- # sort 00:13:50.491 16:12:49 -- target/referrals.sh@26 -- # echo 00:13:50.491 16:12:49 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:50.491 16:12:49 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:50.491 16:12:49 -- target/referrals.sh@86 -- # nvmftestfini 00:13:50.491 16:12:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.491 16:12:49 -- nvmf/common.sh@116 -- # sync 00:13:50.491 16:12:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.491 16:12:49 -- nvmf/common.sh@119 -- # set +e 00:13:50.491 16:12:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.491 16:12:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.491 rmmod nvme_tcp 00:13:50.491 rmmod nvme_fabrics 00:13:50.491 rmmod nvme_keyring 00:13:50.491 16:12:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.491 16:12:49 -- nvmf/common.sh@123 -- # set -e 00:13:50.491 16:12:49 -- nvmf/common.sh@124 -- # return 0 00:13:50.491 16:12:49 -- nvmf/common.sh@477 
-- # '[' -n 2994236 ']' 00:13:50.491 16:12:49 -- nvmf/common.sh@478 -- # killprocess 2994236 00:13:50.491 16:12:49 -- common/autotest_common.sh@926 -- # '[' -z 2994236 ']' 00:13:50.491 16:12:49 -- common/autotest_common.sh@930 -- # kill -0 2994236 00:13:50.491 16:12:49 -- common/autotest_common.sh@931 -- # uname 00:13:50.491 16:12:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.491 16:12:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2994236 00:13:50.491 16:12:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:50.491 16:12:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:50.491 16:12:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2994236' 00:13:50.491 killing process with pid 2994236 00:13:50.491 16:12:49 -- common/autotest_common.sh@945 -- # kill 2994236 00:13:50.491 16:12:49 -- common/autotest_common.sh@950 -- # wait 2994236 00:13:51.064 16:12:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.064 16:12:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.064 16:12:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.064 16:12:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.064 16:12:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.064 16:12:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.064 16:12:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.064 16:12:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.972 16:12:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:52.972 00:13:52.972 real 0m10.496s 00:13:52.972 user 0m11.847s 00:13:52.972 sys 0m4.636s 00:13:52.972 16:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.972 16:12:51 -- common/autotest_common.sh@10 -- # set +x 00:13:52.972 ************************************ 00:13:52.972 END TEST nvmf_referrals 00:13:52.972 ************************************ 00:13:52.972 16:12:51 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:52.972 16:12:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:52.972 16:12:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.972 16:12:51 -- common/autotest_common.sh@10 -- # set +x 00:13:52.972 ************************************ 00:13:52.972 START TEST nvmf_connect_disconnect 00:13:52.972 ************************************ 00:13:52.972 16:12:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:53.231 * Looking for test storage... 
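The nvmf_referrals run that wraps up above exercises the discovery-referral RPCs from both directions: referrals to 127.0.0.2, 127.0.0.3, and 127.0.0.4 on port 4430 are added and removed, and after each change the state is cross-checked twice, once with nvmf_discovery_get_referrals over RPC and once with nvme discover against the discovery listener on 10.0.0.2:8009, where referral entries are the records whose subtype is not "current discovery subsystem". A hedged sketch of the core calls, again written as direct rpc.py invocations rather than the rpc_cmd wrapper used in the trace:

    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length                # referral count over RPC
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The -n flag distinguishes a referral that points at a specific subsystem NQN from one that points at another discovery service (nqn.2014-08.org.nvmexpress.discovery); the jq selects on "nvme subsystem" versus "discovery subsystem referral" in the trace are checking exactly that distinction.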
00:13:53.231 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:53.231 16:12:51 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.231 16:12:51 -- nvmf/common.sh@7 -- # uname -s 00:13:53.231 16:12:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.231 16:12:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.231 16:12:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.231 16:12:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.231 16:12:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.231 16:12:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.231 16:12:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.231 16:12:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.231 16:12:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.231 16:12:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.231 16:12:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:53.231 16:12:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:13:53.231 16:12:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.231 16:12:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.231 16:12:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:53.231 16:12:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:53.231 16:12:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.231 16:12:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.231 16:12:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.231 16:12:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.231 16:12:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.231 16:12:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.231 16:12:51 -- paths/export.sh@5 -- # export PATH 00:13:53.231 16:12:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.231 16:12:51 -- nvmf/common.sh@46 -- # : 0 00:13:53.231 16:12:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:53.231 16:12:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:53.231 16:12:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:53.231 16:12:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.231 16:12:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.231 16:12:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:53.231 16:12:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:53.231 16:12:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:53.231 16:12:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.231 16:12:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.231 16:12:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:53.231 16:12:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:53.231 16:12:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.231 16:12:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:53.231 16:12:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:53.231 16:12:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:53.231 16:12:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.231 16:12:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.232 16:12:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.232 16:12:51 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:13:53.232 16:12:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:53.232 16:12:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:53.232 16:12:51 -- common/autotest_common.sh@10 -- # set +x 00:13:58.511 16:12:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:58.511 16:12:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:58.511 16:12:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:58.511 16:12:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:58.511 16:12:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:58.511 16:12:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:58.511 16:12:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:58.511 16:12:57 -- nvmf/common.sh@294 -- # net_devs=() 00:13:58.511 16:12:57 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:13:58.511 16:12:57 -- nvmf/common.sh@295 -- # e810=() 00:13:58.511 16:12:57 -- nvmf/common.sh@295 -- # local -ga e810 00:13:58.511 16:12:57 -- nvmf/common.sh@296 -- # x722=() 00:13:58.511 16:12:57 -- nvmf/common.sh@296 -- # local -ga x722 00:13:58.511 16:12:57 -- nvmf/common.sh@297 -- # mlx=() 00:13:58.511 16:12:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:58.511 16:12:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.511 16:12:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:58.511 16:12:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:58.511 16:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:58.511 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:58.511 16:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:58.511 16:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:58.511 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:58.511 16:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:58.511 16:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.511 16:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.511 16:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:58.511 Found net devices under 0000:27:00.0: 
cvl_0_0 00:13:58.511 16:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.511 16:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:58.511 16:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.511 16:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.511 16:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:58.511 Found net devices under 0000:27:00.1: cvl_0_1 00:13:58.511 16:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.511 16:12:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:58.511 16:12:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:58.511 16:12:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.511 16:12:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.511 16:12:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.511 16:12:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:58.511 16:12:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.511 16:12:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.511 16:12:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:58.511 16:12:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.511 16:12:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.511 16:12:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:58.511 16:12:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:58.511 16:12:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.511 16:12:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.511 16:12:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.511 16:12:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.511 16:12:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:58.511 16:12:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.511 16:12:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.511 16:12:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.511 16:12:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:58.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:13:58.511 00:13:58.511 --- 10.0.0.2 ping statistics --- 00:13:58.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.511 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:58.511 16:12:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:58.511 00:13:58.511 --- 10.0.0.1 ping statistics --- 00:13:58.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.511 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:58.511 16:12:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.511 16:12:57 -- nvmf/common.sh@410 -- # return 0 00:13:58.511 16:12:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:58.511 16:12:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.511 16:12:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:58.511 16:12:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.511 16:12:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:58.511 16:12:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:58.511 16:12:57 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:58.511 16:12:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:58.511 16:12:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:58.512 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:13:58.512 16:12:57 -- nvmf/common.sh@469 -- # nvmfpid=2998831 00:13:58.512 16:12:57 -- nvmf/common.sh@470 -- # waitforlisten 2998831 00:13:58.512 16:12:57 -- common/autotest_common.sh@819 -- # '[' -z 2998831 ']' 00:13:58.512 16:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.512 16:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:58.512 16:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.512 16:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:58.512 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:13:58.512 16:12:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.512 [2024-04-23 16:12:57.385148] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:13:58.512 [2024-04-23 16:12:57.385250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.772 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.772 [2024-04-23 16:12:57.506864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.772 [2024-04-23 16:12:57.610581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.772 [2024-04-23 16:12:57.610788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.772 [2024-04-23 16:12:57.610803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.772 [2024-04-23 16:12:57.610812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
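The nvmf_tcp_init trace above builds a self-contained TCP test bed: the target-side port is moved into a private network namespace, both sides get 10.0.0.x addresses, TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions. A condensed sketch of that sequence is below; the real function also flushes addresses first and derives the interface names from the detected PCI devices.

# Sketch of the namespace setup performed by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1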
00:13:58.772 [2024-04-23 16:12:57.610886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.772 [2024-04-23 16:12:57.611018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.772 [2024-04-23 16:12:57.611125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.772 [2024-04-23 16:12:57.611136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.342 16:12:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:59.342 16:12:58 -- common/autotest_common.sh@852 -- # return 0 00:13:59.342 16:12:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:59.342 16:12:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 16:12:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:59.342 16:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 [2024-04-23 16:12:58.130541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.342 16:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:59.342 16:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 16:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:59.342 16:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 16:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.342 16:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 16:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.342 16:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.342 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:13:59.342 [2024-04-23 16:12:58.201361] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.342 16:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:59.342 16:12:58 -- target/connect_disconnect.sh@34 -- # set +x 00:14:01.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
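With the reactors running, connect_disconnect.sh provisions the target entirely over JSON-RPC: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420. The rpc_cmd calls traced above correspond roughly to the following scripts/rpc.py invocations (a sketch; rpc_cmd in the test sends the same methods to the app's RPC socket).

# Approximate stand-alone equivalent of the rpc_cmd provisioning traced above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
bdev=$(./scripts/rpc.py bdev_malloc_create 64 512)    # the test captures this name as Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420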
00:14:11.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.620 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:03.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.410 16:16:47 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
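The long run of "disconnected 1 controller(s)" lines above is the test body itself: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8', each iteration connects the kernel host to cnode1 over TCP and disconnects again. One iteration looks roughly like the sketch below; the real loop also waits for the namespace block device to appear and disappear between the two calls.

# Sketch of the connect/disconnect loop whose output appears above.
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
done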
00:17:49.410 16:16:47 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:49.410 16:16:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:49.410 16:16:47 -- nvmf/common.sh@116 -- # sync 00:17:49.410 16:16:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:49.410 16:16:47 -- nvmf/common.sh@119 -- # set +e 00:17:49.410 16:16:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:49.410 16:16:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:49.410 rmmod nvme_tcp 00:17:49.410 rmmod nvme_fabrics 00:17:49.410 rmmod nvme_keyring 00:17:49.410 16:16:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:49.410 16:16:47 -- nvmf/common.sh@123 -- # set -e 00:17:49.410 16:16:47 -- nvmf/common.sh@124 -- # return 0 00:17:49.410 16:16:47 -- nvmf/common.sh@477 -- # '[' -n 2998831 ']' 00:17:49.410 16:16:47 -- nvmf/common.sh@478 -- # killprocess 2998831 00:17:49.410 16:16:47 -- common/autotest_common.sh@926 -- # '[' -z 2998831 ']' 00:17:49.410 16:16:47 -- common/autotest_common.sh@930 -- # kill -0 2998831 00:17:49.410 16:16:47 -- common/autotest_common.sh@931 -- # uname 00:17:49.410 16:16:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.410 16:16:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2998831 00:17:49.410 16:16:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:49.410 16:16:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:49.410 16:16:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2998831' 00:17:49.410 killing process with pid 2998831 00:17:49.410 16:16:47 -- common/autotest_common.sh@945 -- # kill 2998831 00:17:49.410 16:16:47 -- common/autotest_common.sh@950 -- # wait 2998831 00:17:49.668 16:16:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:49.668 16:16:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:49.668 16:16:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:49.668 16:16:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.668 16:16:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:49.668 16:16:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.668 16:16:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.668 16:16:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.692 16:16:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:51.692 00:17:51.692 real 3m58.567s 00:17:51.692 user 15m16.701s 00:17:51.692 sys 0m14.142s 00:17:51.692 16:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.692 16:16:50 -- common/autotest_common.sh@10 -- # set +x 00:17:51.692 ************************************ 00:17:51.692 END TEST nvmf_connect_disconnect 00:17:51.692 ************************************ 00:17:51.692 16:16:50 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:51.692 16:16:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:51.692 16:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.692 16:16:50 -- common/autotest_common.sh@10 -- # set +x 00:17:51.692 ************************************ 00:17:51.692 START TEST nvmf_multitarget 00:17:51.692 ************************************ 00:17:51.692 16:16:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:51.692 * Looking for test storage... 
00:17:51.692 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:51.692 16:16:50 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.692 16:16:50 -- nvmf/common.sh@7 -- # uname -s 00:17:51.692 16:16:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.692 16:16:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.693 16:16:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.693 16:16:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.693 16:16:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.693 16:16:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.693 16:16:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.693 16:16:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.693 16:16:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.693 16:16:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.693 16:16:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:51.693 16:16:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:51.693 16:16:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.693 16:16:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.693 16:16:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:51.693 16:16:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:51.693 16:16:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.693 16:16:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.693 16:16:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.693 16:16:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.693 16:16:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.693 16:16:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.693 16:16:50 -- paths/export.sh@5 -- # export PATH 00:17:51.693 16:16:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.693 16:16:50 -- nvmf/common.sh@46 -- # : 0 00:17:51.693 16:16:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:51.693 16:16:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:51.693 16:16:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:51.693 16:16:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.693 16:16:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.693 16:16:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:51.693 16:16:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:51.693 16:16:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:51.693 16:16:50 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:51.693 16:16:50 -- target/multitarget.sh@15 -- # nvmftestinit 00:17:51.693 16:16:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:51.693 16:16:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.693 16:16:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:51.693 16:16:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:51.693 16:16:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:51.693 16:16:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.693 16:16:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.693 16:16:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.693 16:16:50 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:17:51.693 16:16:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:51.693 16:16:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:51.693 16:16:50 -- common/autotest_common.sh@10 -- # set +x 00:17:56.976 16:16:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:56.976 16:16:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:56.976 16:16:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:56.976 16:16:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:56.976 16:16:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:56.976 16:16:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:56.976 16:16:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:56.976 16:16:55 -- nvmf/common.sh@294 -- # net_devs=() 00:17:56.976 16:16:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:56.976 16:16:55 -- 
nvmf/common.sh@295 -- # e810=() 00:17:56.976 16:16:55 -- nvmf/common.sh@295 -- # local -ga e810 00:17:56.976 16:16:55 -- nvmf/common.sh@296 -- # x722=() 00:17:56.976 16:16:55 -- nvmf/common.sh@296 -- # local -ga x722 00:17:56.976 16:16:55 -- nvmf/common.sh@297 -- # mlx=() 00:17:56.976 16:16:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:56.976 16:16:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.976 16:16:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:56.976 16:16:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.976 16:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:56.976 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:56.976 16:16:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:56.976 16:16:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:56.976 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:56.976 16:16:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.976 16:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.976 16:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.976 16:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:56.976 Found net devices under 0000:27:00.0: cvl_0_0 00:17:56.976 
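The gather_supported_nvmf_pci_devs block traced above buckets NICs by PCI vendor/device ID into the e810, x722 and mlx arrays, then resolves each matching PCI address to its net device through sysfs. Stripped of the per-family arrays and the RDMA-specific handling, the underlying idea is roughly the sketch below.

# Simplified sketch of the PCI scan behind the "Found 0000:27:00.x" lines above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue    # Intel E810, as detected here
    echo "Found ${pci##*/} ($vendor - $device)"
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] && echo "Found net devices under ${pci##*/}: ${netdev##*/}"
    done
done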
16:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.976 16:16:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:56.976 16:16:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.976 16:16:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.976 16:16:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:56.976 Found net devices under 0000:27:00.1: cvl_0_1 00:17:56.976 16:16:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.976 16:16:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:56.976 16:16:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:56.976 16:16:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:56.976 16:16:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.976 16:16:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.976 16:16:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.976 16:16:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:56.976 16:16:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.977 16:16:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.977 16:16:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:56.977 16:16:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.977 16:16:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.977 16:16:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:56.977 16:16:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:56.977 16:16:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.977 16:16:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.238 16:16:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.238 16:16:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.238 16:16:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:57.238 16:16:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.238 16:16:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.238 16:16:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.238 16:16:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:57.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:57.238 00:17:57.238 --- 10.0.0.2 ping statistics --- 00:17:57.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.238 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:57.238 16:16:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:57.238 00:17:57.238 --- 10.0.0.1 ping statistics --- 00:17:57.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.238 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:57.238 16:16:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.238 16:16:56 -- nvmf/common.sh@410 -- # return 0 00:17:57.238 16:16:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.238 16:16:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.238 16:16:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.238 16:16:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.238 16:16:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.238 16:16:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.238 16:16:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.238 16:16:56 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:57.238 16:16:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.238 16:16:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:57.238 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:17:57.239 16:16:56 -- nvmf/common.sh@469 -- # nvmfpid=3048822 00:17:57.239 16:16:56 -- nvmf/common.sh@470 -- # waitforlisten 3048822 00:17:57.239 16:16:56 -- common/autotest_common.sh@819 -- # '[' -z 3048822 ']' 00:17:57.239 16:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.239 16:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.239 16:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.239 16:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.239 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:17:57.239 16:16:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.500 [2024-04-23 16:16:56.188302] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:17:57.500 [2024-04-23 16:16:56.188410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.500 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.500 [2024-04-23 16:16:56.313674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.500 [2024-04-23 16:16:56.413263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.500 [2024-04-23 16:16:56.413444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.500 [2024-04-23 16:16:56.413461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.500 [2024-04-23 16:16:56.413471] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
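nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its UNIX-domain RPC socket. In outline it behaves like the sketch below; the real helper also verifies the pid and gives up after 100 retries (max_retries=100 above).

# Outline of nvmfappstart/waitforlisten as traced above (sketch).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target process died
    sleep 0.5
done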
00:17:57.500 [2024-04-23 16:16:56.413553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.500 [2024-04-23 16:16:56.413681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.500 [2024-04-23 16:16:56.413716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.500 [2024-04-23 16:16:56.413725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.070 16:16:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.070 16:16:56 -- common/autotest_common.sh@852 -- # return 0 00:17:58.070 16:16:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.070 16:16:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:58.070 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:17:58.070 16:16:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.070 16:16:56 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:58.070 16:16:56 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.070 16:16:56 -- target/multitarget.sh@21 -- # jq length 00:17:58.329 16:16:57 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:58.329 16:16:57 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:58.329 "nvmf_tgt_1" 00:17:58.329 16:16:57 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:58.329 "nvmf_tgt_2" 00:17:58.329 16:16:57 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.329 16:16:57 -- target/multitarget.sh@28 -- # jq length 00:17:58.588 16:16:57 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:58.588 16:16:57 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:58.588 true 00:17:58.588 16:16:57 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:58.588 true 00:17:58.588 16:16:57 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:58.588 16:16:57 -- target/multitarget.sh@35 -- # jq length 00:17:58.588 16:16:57 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:58.588 16:16:57 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:58.588 16:16:57 -- target/multitarget.sh@41 -- # nvmftestfini 00:17:58.588 16:16:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.588 16:16:57 -- nvmf/common.sh@116 -- # sync 00:17:58.847 16:16:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.847 16:16:57 -- nvmf/common.sh@119 -- # set +e 00:17:58.847 16:16:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.847 16:16:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.847 rmmod nvme_tcp 00:17:58.847 rmmod nvme_fabrics 00:17:58.847 rmmod nvme_keyring 00:17:58.847 16:16:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.847 16:16:57 -- nvmf/common.sh@123 -- # set -e 00:17:58.847 16:16:57 -- nvmf/common.sh@124 -- # return 0 00:17:58.847 16:16:57 -- nvmf/common.sh@477 
-- # '[' -n 3048822 ']' 00:17:58.847 16:16:57 -- nvmf/common.sh@478 -- # killprocess 3048822 00:17:58.847 16:16:57 -- common/autotest_common.sh@926 -- # '[' -z 3048822 ']' 00:17:58.847 16:16:57 -- common/autotest_common.sh@930 -- # kill -0 3048822 00:17:58.847 16:16:57 -- common/autotest_common.sh@931 -- # uname 00:17:58.847 16:16:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.847 16:16:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3048822 00:17:58.847 16:16:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:58.847 16:16:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:58.847 16:16:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3048822' 00:17:58.847 killing process with pid 3048822 00:17:58.847 16:16:57 -- common/autotest_common.sh@945 -- # kill 3048822 00:17:58.847 16:16:57 -- common/autotest_common.sh@950 -- # wait 3048822 00:17:59.416 16:16:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.416 16:16:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.416 16:16:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.416 16:16:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.416 16:16:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.416 16:16:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.416 16:16:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.416 16:16:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.322 16:17:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:01.322 00:18:01.322 real 0m9.654s 00:18:01.322 user 0m8.401s 00:18:01.322 sys 0m4.534s 00:18:01.322 16:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.322 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:18:01.322 ************************************ 00:18:01.322 END TEST nvmf_multitarget 00:18:01.322 ************************************ 00:18:01.322 16:17:00 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:01.322 16:17:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.322 16:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.322 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:18:01.322 ************************************ 00:18:01.322 START TEST nvmf_rpc 00:18:01.322 ************************************ 00:18:01.322 16:17:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:01.322 * Looking for test storage... 
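The nvmf_multitarget run that finished above exercises multiple target instances inside one nvmf_tgt process through test/nvmf/target/multitarget_rpc.py: it creates nvmf_tgt_1 and nvmf_tgt_2, checks the count reported by nvmf_get_targets with jq, then deletes them again. In shell form it is roughly the sketch below.

# Condensed sketch of the multitarget checks traced above.
rpc_py=./test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]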
00:18:01.322 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:01.322 16:17:00 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.581 16:17:00 -- nvmf/common.sh@7 -- # uname -s 00:18:01.581 16:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.581 16:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.581 16:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.581 16:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.581 16:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.581 16:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.581 16:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.581 16:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.581 16:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.581 16:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.581 16:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:01.581 16:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:01.581 16:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.581 16:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.581 16:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:01.581 16:17:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:01.581 16:17:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.581 16:17:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.581 16:17:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.581 16:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.581 16:17:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.581 16:17:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.581 16:17:00 -- paths/export.sh@5 -- # export PATH 00:18:01.581 16:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.581 16:17:00 -- nvmf/common.sh@46 -- # : 0 00:18:01.581 16:17:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.581 16:17:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.581 16:17:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.581 16:17:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.581 16:17:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.581 16:17:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.581 16:17:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.581 16:17:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.581 16:17:00 -- target/rpc.sh@11 -- # loops=5 00:18:01.581 16:17:00 -- target/rpc.sh@23 -- # nvmftestinit 00:18:01.581 16:17:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.581 16:17:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.581 16:17:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.581 16:17:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.581 16:17:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.581 16:17:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.581 16:17:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.581 16:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.581 16:17:00 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:01.581 16:17:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.581 16:17:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.581 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:18:08.159 16:17:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:08.159 16:17:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:08.159 16:17:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:08.159 16:17:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:08.159 16:17:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:08.159 16:17:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:08.159 16:17:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:08.159 16:17:06 -- nvmf/common.sh@294 -- # net_devs=() 00:18:08.159 16:17:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:08.159 16:17:06 -- nvmf/common.sh@295 -- # e810=() 00:18:08.159 16:17:06 -- nvmf/common.sh@295 -- # local -ga e810 
00:18:08.159 16:17:06 -- nvmf/common.sh@296 -- # x722=() 00:18:08.159 16:17:06 -- nvmf/common.sh@296 -- # local -ga x722 00:18:08.159 16:17:06 -- nvmf/common.sh@297 -- # mlx=() 00:18:08.159 16:17:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:08.159 16:17:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.159 16:17:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:08.159 16:17:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.159 16:17:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:08.159 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:08.159 16:17:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:08.159 16:17:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:08.159 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:08.159 16:17:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.159 16:17:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.159 16:17:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.159 16:17:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:08.159 Found net devices under 0000:27:00.0: cvl_0_0 00:18:08.159 16:17:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.159 16:17:06 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:08.159 16:17:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.159 16:17:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.159 16:17:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:08.159 Found net devices under 0000:27:00.1: cvl_0_1 00:18:08.159 16:17:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.159 16:17:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:08.159 16:17:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:08.159 16:17:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.159 16:17:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.159 16:17:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.159 16:17:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:08.159 16:17:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.159 16:17:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.159 16:17:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:08.159 16:17:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.159 16:17:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.159 16:17:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:08.159 16:17:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:08.159 16:17:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.159 16:17:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.159 16:17:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.159 16:17:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.159 16:17:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:08.159 16:17:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.159 16:17:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.159 16:17:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.159 16:17:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:08.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:18:08.159 00:18:08.159 --- 10.0.0.2 ping statistics --- 00:18:08.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.159 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:18:08.159 16:17:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:18:08.159 00:18:08.159 --- 10.0.0.1 ping statistics --- 00:18:08.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.159 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:08.159 16:17:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.159 16:17:06 -- nvmf/common.sh@410 -- # return 0 00:18:08.159 16:17:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.159 16:17:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.159 16:17:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:08.159 16:17:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.159 16:17:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:08.159 16:17:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:08.159 16:17:06 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:08.159 16:17:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:08.159 16:17:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:08.159 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:18:08.159 16:17:06 -- nvmf/common.sh@469 -- # nvmfpid=3053350 00:18:08.159 16:17:06 -- nvmf/common.sh@470 -- # waitforlisten 3053350 00:18:08.159 16:17:06 -- common/autotest_common.sh@819 -- # '[' -z 3053350 ']' 00:18:08.159 16:17:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.159 16:17:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:08.159 16:17:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.159 16:17:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:08.159 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:18:08.159 16:17:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:08.159 [2024-04-23 16:17:06.838134] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:08.159 [2024-04-23 16:17:06.838215] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.159 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.159 [2024-04-23 16:17:06.946592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.159 [2024-04-23 16:17:07.047884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:08.159 [2024-04-23 16:17:07.048104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.159 [2024-04-23 16:17:07.048120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.159 [2024-04-23 16:17:07.048130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
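The target-side network namespace built above can be condensed into the sketch below (same interface and namespace names as in the trace; the nvmf target app then runs inside the namespace while the initiator stays on the host):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                                     # host -> target reachability check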
00:18:08.159 [2024-04-23 16:17:07.048194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.159 [2024-04-23 16:17:07.048237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.159 [2024-04-23 16:17:07.048340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.159 [2024-04-23 16:17:07.048352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.733 16:17:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:08.733 16:17:07 -- common/autotest_common.sh@852 -- # return 0 00:18:08.733 16:17:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:08.733 16:17:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:08.733 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.733 16:17:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.733 16:17:07 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:08.733 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.733 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.733 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.733 16:17:07 -- target/rpc.sh@26 -- # stats='{ 00:18:08.733 "tick_rate": 1900000000, 00:18:08.733 "poll_groups": [ 00:18:08.733 { 00:18:08.733 "name": "nvmf_tgt_poll_group_0", 00:18:08.733 "admin_qpairs": 0, 00:18:08.733 "io_qpairs": 0, 00:18:08.733 "current_admin_qpairs": 0, 00:18:08.733 "current_io_qpairs": 0, 00:18:08.733 "pending_bdev_io": 0, 00:18:08.733 "completed_nvme_io": 0, 00:18:08.733 "transports": [] 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "nvmf_tgt_poll_group_1", 00:18:08.733 "admin_qpairs": 0, 00:18:08.733 "io_qpairs": 0, 00:18:08.733 "current_admin_qpairs": 0, 00:18:08.733 "current_io_qpairs": 0, 00:18:08.733 "pending_bdev_io": 0, 00:18:08.733 "completed_nvme_io": 0, 00:18:08.733 "transports": [] 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "nvmf_tgt_poll_group_2", 00:18:08.733 "admin_qpairs": 0, 00:18:08.733 "io_qpairs": 0, 00:18:08.733 "current_admin_qpairs": 0, 00:18:08.733 "current_io_qpairs": 0, 00:18:08.733 "pending_bdev_io": 0, 00:18:08.733 "completed_nvme_io": 0, 00:18:08.733 "transports": [] 00:18:08.733 }, 00:18:08.733 { 00:18:08.733 "name": "nvmf_tgt_poll_group_3", 00:18:08.733 "admin_qpairs": 0, 00:18:08.733 "io_qpairs": 0, 00:18:08.733 "current_admin_qpairs": 0, 00:18:08.733 "current_io_qpairs": 0, 00:18:08.733 "pending_bdev_io": 0, 00:18:08.733 "completed_nvme_io": 0, 00:18:08.733 "transports": [] 00:18:08.733 } 00:18:08.733 ] 00:18:08.733 }' 00:18:08.733 16:17:07 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:08.733 16:17:07 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:08.733 16:17:07 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:08.733 16:17:07 -- target/rpc.sh@15 -- # wc -l 00:18:08.733 16:17:07 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:08.733 16:17:07 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:08.993 16:17:07 -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:08.993 16:17:07 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.993 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.993 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.993 [2024-04-23 16:17:07.700873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.993 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.993 16:17:07 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:08.993 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.993 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.993 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.993 16:17:07 -- target/rpc.sh@33 -- # stats='{ 00:18:08.993 "tick_rate": 1900000000, 00:18:08.993 "poll_groups": [ 00:18:08.993 { 00:18:08.993 "name": "nvmf_tgt_poll_group_0", 00:18:08.993 "admin_qpairs": 0, 00:18:08.993 "io_qpairs": 0, 00:18:08.993 "current_admin_qpairs": 0, 00:18:08.993 "current_io_qpairs": 0, 00:18:08.993 "pending_bdev_io": 0, 00:18:08.993 "completed_nvme_io": 0, 00:18:08.993 "transports": [ 00:18:08.993 { 00:18:08.993 "trtype": "TCP" 00:18:08.993 } 00:18:08.993 ] 00:18:08.993 }, 00:18:08.993 { 00:18:08.993 "name": "nvmf_tgt_poll_group_1", 00:18:08.993 "admin_qpairs": 0, 00:18:08.993 "io_qpairs": 0, 00:18:08.993 "current_admin_qpairs": 0, 00:18:08.993 "current_io_qpairs": 0, 00:18:08.993 "pending_bdev_io": 0, 00:18:08.993 "completed_nvme_io": 0, 00:18:08.993 "transports": [ 00:18:08.993 { 00:18:08.993 "trtype": "TCP" 00:18:08.993 } 00:18:08.993 ] 00:18:08.993 }, 00:18:08.993 { 00:18:08.993 "name": "nvmf_tgt_poll_group_2", 00:18:08.993 "admin_qpairs": 0, 00:18:08.993 "io_qpairs": 0, 00:18:08.993 "current_admin_qpairs": 0, 00:18:08.993 "current_io_qpairs": 0, 00:18:08.993 "pending_bdev_io": 0, 00:18:08.993 "completed_nvme_io": 0, 00:18:08.993 "transports": [ 00:18:08.993 { 00:18:08.993 "trtype": "TCP" 00:18:08.993 } 00:18:08.993 ] 00:18:08.993 }, 00:18:08.993 { 00:18:08.993 "name": "nvmf_tgt_poll_group_3", 00:18:08.993 "admin_qpairs": 0, 00:18:08.993 "io_qpairs": 0, 00:18:08.993 "current_admin_qpairs": 0, 00:18:08.993 "current_io_qpairs": 0, 00:18:08.993 "pending_bdev_io": 0, 00:18:08.993 "completed_nvme_io": 0, 00:18:08.993 "transports": [ 00:18:08.993 { 00:18:08.993 "trtype": "TCP" 00:18:08.993 } 00:18:08.993 ] 00:18:08.993 } 00:18:08.993 ] 00:18:08.993 }' 00:18:08.993 16:17:07 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:08.993 16:17:07 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:08.993 16:17:07 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:08.993 16:17:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:08.993 16:17:07 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:08.993 16:17:07 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:08.993 16:17:07 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:08.993 16:17:07 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:08.994 16:17:07 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 Malloc1 00:18:08.994 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 
16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 [2024-04-23 16:17:07.870891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.994 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:08.994 16:17:07 -- common/autotest_common.sh@640 -- # local es=0 00:18:08.994 16:17:07 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:08.994 16:17:07 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:08.994 16:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.994 16:17:07 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:08.994 16:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.994 16:17:07 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:08.994 16:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:08.994 16:17:07 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:08.994 16:17:07 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:08.994 16:17:07 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:18:08.994 [2024-04-23 16:17:07.900077] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:18:08.994 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:08.994 could not add new controller: failed to write to nvme-fabrics device 00:18:08.994 16:17:07 -- common/autotest_common.sh@643 -- # es=1 00:18:08.994 16:17:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:08.994 16:17:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:08.994 16:17:07 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:18:08.994 16:17:07 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:08.994 16:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.994 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:18:08.994 16:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.994 16:17:07 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.909 16:17:09 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:10.909 16:17:09 -- common/autotest_common.sh@1177 -- # local i=0 00:18:10.909 16:17:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.909 16:17:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:10.909 16:17:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:12.821 16:17:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:12.821 16:17:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:12.821 16:17:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:12.821 16:17:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:12.821 16:17:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.821 16:17:11 -- common/autotest_common.sh@1187 -- # return 0 00:18:12.821 16:17:11 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.821 16:17:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:12.821 16:17:11 -- common/autotest_common.sh@1198 -- # local i=0 00:18:12.821 16:17:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:12.821 16:17:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.821 16:17:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:12.821 16:17:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:12.821 16:17:11 -- common/autotest_common.sh@1210 -- # return 0 00:18:12.821 16:17:11 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:12.821 16:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.821 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:12.821 16:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.821 16:17:11 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.821 16:17:11 -- common/autotest_common.sh@640 -- # local es=0 00:18:12.821 16:17:11 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.821 16:17:11 -- common/autotest_common.sh@628 -- # local arg=nvme 00:18:12.821 16:17:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:12.821 16:17:11 -- common/autotest_common.sh@632 -- # type -t nvme 00:18:12.821 16:17:11 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:12.821 16:17:11 -- common/autotest_common.sh@634 -- # type -P nvme 00:18:12.821 16:17:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:12.821 16:17:11 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:18:12.821 16:17:11 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:18:12.821 16:17:11 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.821 [2024-04-23 16:17:11.612871] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:18:12.821 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:12.821 could not add new controller: failed to write to nvme-fabrics device 00:18:12.821 16:17:11 -- common/autotest_common.sh@643 -- # es=1 00:18:12.821 16:17:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:12.821 16:17:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:12.821 16:17:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:12.821 16:17:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:12.821 16:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.821 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:18:12.821 16:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.821 16:17:11 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:14.204 16:17:13 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:14.204 16:17:13 -- common/autotest_common.sh@1177 -- # local i=0 00:18:14.204 16:17:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.204 16:17:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:14.204 16:17:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:16.740 16:17:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:16.740 16:17:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:16.740 16:17:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.740 16:17:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:16.740 16:17:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.740 16:17:15 -- common/autotest_common.sh@1187 -- # return 0 00:18:16.740 16:17:15 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.740 16:17:15 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.740 16:17:15 -- common/autotest_common.sh@1198 -- # local i=0 00:18:16.740 16:17:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:16.740 16:17:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.740 16:17:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:16.740 16:17:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.740 16:17:15 -- common/autotest_common.sh@1210 -- # return 0 00:18:16.740 16:17:15 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.740 16:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.740 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.740 16:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.740 16:17:15 -- target/rpc.sh@81 -- # seq 1 5 00:18:16.740 16:17:15 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:16.740 16:17:15 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.740 16:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.740 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.740 16:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.740 16:17:15 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.740 16:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.740 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.740 [2024-04-23 16:17:15.318756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.740 16:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.740 16:17:15 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:16.740 16:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.740 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.740 16:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.740 16:17:15 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.740 16:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:16.740 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.740 16:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:16.740 16:17:15 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.119 16:17:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.119 16:17:16 -- common/autotest_common.sh@1177 -- # local i=0 00:18:18.119 16:17:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.119 16:17:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:18.119 16:17:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:20.029 16:17:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:20.029 16:17:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:20.029 16:17:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.029 16:17:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:20.029 16:17:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.029 16:17:18 -- common/autotest_common.sh@1187 -- # return 0 00:18:20.029 16:17:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.288 16:17:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.288 16:17:19 -- common/autotest_common.sh@1198 -- # local i=0 00:18:20.288 16:17:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:20.288 16:17:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
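Each pass of the create/connect/tear-down loop begun above (and repeated over the next several iterations) amounts to the following rpc.py / nvme-cli sequence; this is a sketch rather than the harness code, and it assumes the default /var/tmp/spdk.sock RPC socket and the Malloc1 bdev created earlier:
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator attaches, serial SPDKISFASTANDAWESOME appears in lsblk
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1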
00:18:20.288 16:17:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:20.288 16:17:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.288 16:17:19 -- common/autotest_common.sh@1210 -- # return 0 00:18:20.288 16:17:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:20.288 16:17:19 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 [2024-04-23 16:17:19.057359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.288 16:17:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:20.288 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 16:17:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:20.288 16:17:19 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.666 16:17:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.666 16:17:20 -- common/autotest_common.sh@1177 -- # local i=0 00:18:21.666 16:17:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.666 16:17:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:21.666 16:17:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:24.202 16:17:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:24.202 16:17:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:24.202 16:17:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:24.202 16:17:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:24.202 16:17:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.202 16:17:22 -- 
common/autotest_common.sh@1187 -- # return 0 00:18:24.202 16:17:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.202 16:17:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.202 16:17:22 -- common/autotest_common.sh@1198 -- # local i=0 00:18:24.202 16:17:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:24.202 16:17:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.202 16:17:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:24.202 16:17:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.202 16:17:22 -- common/autotest_common.sh@1210 -- # return 0 00:18:24.202 16:17:22 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:24.202 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.202 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.202 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.202 16:17:22 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.202 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.202 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.202 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.202 16:17:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:24.203 16:17:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:24.203 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.203 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.203 16:17:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.203 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.203 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 [2024-04-23 16:17:22.858331] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.203 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.203 16:17:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:24.203 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.203 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.203 16:17:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:24.203 16:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.203 16:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 16:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.203 16:17:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.580 16:17:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:25.580 16:17:24 -- common/autotest_common.sh@1177 -- # local i=0 00:18:25.580 16:17:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.580 16:17:24 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:18:25.580 16:17:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:27.486 16:17:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:27.486 16:17:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:27.486 16:17:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.486 16:17:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:27.486 16:17:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.486 16:17:26 -- common/autotest_common.sh@1187 -- # return 0 00:18:27.486 16:17:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:27.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.744 16:17:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:27.744 16:17:26 -- common/autotest_common.sh@1198 -- # local i=0 00:18:27.744 16:17:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:27.744 16:17:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.744 16:17:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:27.744 16:17:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:27.744 16:17:26 -- common/autotest_common.sh@1210 -- # return 0 00:18:27.744 16:17:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 16:17:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 16:17:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:27.744 16:17:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 16:17:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 [2024-04-23 16:17:26.540209] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 16:17:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 16:17:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:27.744 16:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:27.744 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:18:27.744 16:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:27.744 
16:17:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.124 16:17:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:29.124 16:17:27 -- common/autotest_common.sh@1177 -- # local i=0 00:18:29.124 16:17:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.124 16:17:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:29.124 16:17:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:31.661 16:17:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:31.661 16:17:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:31.661 16:17:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.661 16:17:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:31.661 16:17:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.661 16:17:29 -- common/autotest_common.sh@1187 -- # return 0 00:18:31.661 16:17:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:31.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.661 16:17:30 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:31.661 16:17:30 -- common/autotest_common.sh@1198 -- # local i=0 00:18:31.661 16:17:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:31.661 16:17:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.661 16:17:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:31.661 16:17:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:31.661 16:17:30 -- common/autotest_common.sh@1210 -- # return 0 00:18:31.661 16:17:30 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:31.661 16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.661 16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:31.661 16:17:30 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:31.661 16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.661 16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 [2024-04-23 16:17:30.221560] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:31.661 
16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:31.661 16:17:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:31.661 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:18:31.661 16:17:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:31.661 16:17:30 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:33.040 16:17:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:33.040 16:17:31 -- common/autotest_common.sh@1177 -- # local i=0 00:18:33.040 16:17:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.040 16:17:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:33.040 16:17:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:34.949 16:17:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:34.949 16:17:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:34.949 16:17:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:34.949 16:17:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:34.949 16:17:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.949 16:17:33 -- common/autotest_common.sh@1187 -- # return 0 00:18:34.949 16:17:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.949 16:17:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.949 16:17:33 -- common/autotest_common.sh@1198 -- # local i=0 00:18:34.949 16:17:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.949 16:17:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:34.949 16:17:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:34.949 16:17:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.949 16:17:33 -- common/autotest_common.sh@1210 -- # return 0 00:18:34.949 16:17:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.949 16:17:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.949 16:17:33 -- target/rpc.sh@99 -- # seq 1 5 00:18:34.949 16:17:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:34.949 16:17:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.949 16:17:33 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 [2024-04-23 16:17:33.866881] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.949 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.949 16:17:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:34.949 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.949 16:17:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:34.949 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.949 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:35.209 16:17:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 [2024-04-23 16:17:33.914851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:35.209 16:17:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 [2024-04-23 16:17:33.962907] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:33 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:35.209 16:17:33 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:35.209 16:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 [2024-04-23 16:17:34.010958] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 
16:17:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:35.209 16:17:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 [2024-04-23 16:17:34.059066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.209 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.209 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.209 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.209 16:17:34 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
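The jsum checks that follow reduce the nvmf_get_stats output to a single number per field with jq and awk; the equivalent standalone pipeline (assuming the default RPC socket) is:
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'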
00:18:35.210 16:17:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.210 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:35.210 16:17:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.210 16:17:34 -- target/rpc.sh@110 -- # stats='{ 00:18:35.210 "tick_rate": 1900000000, 00:18:35.210 "poll_groups": [ 00:18:35.210 { 00:18:35.210 "name": "nvmf_tgt_poll_group_0", 00:18:35.210 "admin_qpairs": 0, 00:18:35.210 "io_qpairs": 224, 00:18:35.210 "current_admin_qpairs": 0, 00:18:35.210 "current_io_qpairs": 0, 00:18:35.210 "pending_bdev_io": 0, 00:18:35.210 "completed_nvme_io": 229, 00:18:35.210 "transports": [ 00:18:35.210 { 00:18:35.210 "trtype": "TCP" 00:18:35.210 } 00:18:35.210 ] 00:18:35.210 }, 00:18:35.210 { 00:18:35.210 "name": "nvmf_tgt_poll_group_1", 00:18:35.210 "admin_qpairs": 1, 00:18:35.210 "io_qpairs": 223, 00:18:35.210 "current_admin_qpairs": 0, 00:18:35.210 "current_io_qpairs": 0, 00:18:35.210 "pending_bdev_io": 0, 00:18:35.210 "completed_nvme_io": 225, 00:18:35.210 "transports": [ 00:18:35.210 { 00:18:35.210 "trtype": "TCP" 00:18:35.210 } 00:18:35.210 ] 00:18:35.210 }, 00:18:35.210 { 00:18:35.210 "name": "nvmf_tgt_poll_group_2", 00:18:35.210 "admin_qpairs": 6, 00:18:35.210 "io_qpairs": 218, 00:18:35.210 "current_admin_qpairs": 0, 00:18:35.210 "current_io_qpairs": 0, 00:18:35.210 "pending_bdev_io": 0, 00:18:35.210 "completed_nvme_io": 267, 00:18:35.210 "transports": [ 00:18:35.210 { 00:18:35.210 "trtype": "TCP" 00:18:35.210 } 00:18:35.210 ] 00:18:35.210 }, 00:18:35.210 { 00:18:35.210 "name": "nvmf_tgt_poll_group_3", 00:18:35.210 "admin_qpairs": 0, 00:18:35.210 "io_qpairs": 224, 00:18:35.210 "current_admin_qpairs": 0, 00:18:35.210 "current_io_qpairs": 0, 00:18:35.210 "pending_bdev_io": 0, 00:18:35.210 "completed_nvme_io": 518, 00:18:35.210 "transports": [ 00:18:35.210 { 00:18:35.210 "trtype": "TCP" 00:18:35.210 } 00:18:35.210 ] 00:18:35.210 } 00:18:35.210 ] 00:18:35.210 }' 00:18:35.210 16:17:34 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:35.210 16:17:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:35.210 16:17:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:35.210 16:17:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:35.470 16:17:34 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:35.470 16:17:34 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:35.470 16:17:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:35.470 16:17:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:35.470 16:17:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:35.470 16:17:34 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:35.470 16:17:34 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:35.470 16:17:34 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:35.470 16:17:34 -- target/rpc.sh@123 -- # nvmftestfini 00:18:35.470 16:17:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.470 16:17:34 -- nvmf/common.sh@116 -- # sync 00:18:35.470 16:17:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.470 16:17:34 -- nvmf/common.sh@119 -- # set +e 00:18:35.470 16:17:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.470 16:17:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.470 rmmod nvme_tcp 00:18:35.470 rmmod nvme_fabrics 00:18:35.470 rmmod nvme_keyring 00:18:35.470 16:17:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.470 16:17:34 -- nvmf/common.sh@123 -- # set -e 00:18:35.470 16:17:34 -- 
nvmf/common.sh@124 -- # return 0 00:18:35.470 16:17:34 -- nvmf/common.sh@477 -- # '[' -n 3053350 ']' 00:18:35.470 16:17:34 -- nvmf/common.sh@478 -- # killprocess 3053350 00:18:35.470 16:17:34 -- common/autotest_common.sh@926 -- # '[' -z 3053350 ']' 00:18:35.470 16:17:34 -- common/autotest_common.sh@930 -- # kill -0 3053350 00:18:35.470 16:17:34 -- common/autotest_common.sh@931 -- # uname 00:18:35.470 16:17:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:35.470 16:17:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3053350 00:18:35.470 16:17:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:35.470 16:17:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:35.470 16:17:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3053350' 00:18:35.470 killing process with pid 3053350 00:18:35.470 16:17:34 -- common/autotest_common.sh@945 -- # kill 3053350 00:18:35.470 16:17:34 -- common/autotest_common.sh@950 -- # wait 3053350 00:18:36.041 16:17:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:36.041 16:17:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:36.041 16:17:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:36.041 16:17:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.041 16:17:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:36.041 16:17:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.041 16:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.041 16:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.585 16:17:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:38.585 00:18:38.585 real 0m36.751s 00:18:38.585 user 1m51.978s 00:18:38.585 sys 0m6.319s 00:18:38.585 16:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.585 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.585 ************************************ 00:18:38.585 END TEST nvmf_rpc 00:18:38.585 ************************************ 00:18:38.585 16:17:36 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:38.585 16:17:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:38.585 16:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:38.585 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.585 ************************************ 00:18:38.585 START TEST nvmf_invalid 00:18:38.585 ************************************ 00:18:38.585 16:17:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:38.585 * Looking for test storage... 
00:18:38.585 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:38.585 16:17:37 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.585 16:17:37 -- nvmf/common.sh@7 -- # uname -s 00:18:38.585 16:17:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.585 16:17:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.585 16:17:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.585 16:17:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.585 16:17:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.585 16:17:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.585 16:17:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.585 16:17:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.585 16:17:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.585 16:17:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.585 16:17:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:38.585 16:17:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:38.585 16:17:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.585 16:17:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.585 16:17:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:38.585 16:17:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:38.585 16:17:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.585 16:17:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.585 16:17:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.585 16:17:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 16:17:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 16:17:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 16:17:37 -- paths/export.sh@5 -- # export PATH 00:18:38.586 16:17:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.586 16:17:37 -- nvmf/common.sh@46 -- # : 0 00:18:38.586 16:17:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:38.586 16:17:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:38.586 16:17:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:38.586 16:17:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.586 16:17:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.586 16:17:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:38.586 16:17:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:38.586 16:17:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:38.586 16:17:37 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:38.586 16:17:37 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:38.586 16:17:37 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:38.586 16:17:37 -- target/invalid.sh@14 -- # target=foobar 00:18:38.586 16:17:37 -- target/invalid.sh@16 -- # RANDOM=0 00:18:38.586 16:17:37 -- target/invalid.sh@34 -- # nvmftestinit 00:18:38.586 16:17:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:38.586 16:17:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.586 16:17:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:38.586 16:17:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:38.586 16:17:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:38.586 16:17:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.586 16:17:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.586 16:17:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.586 16:17:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:38.586 16:17:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:38.586 16:17:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:38.586 16:17:37 -- common/autotest_common.sh@10 -- # set +x 00:18:43.871 16:17:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:43.871 16:17:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:43.871 16:17:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:43.871 16:17:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:43.871 16:17:42 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:43.871 16:17:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:43.871 16:17:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:43.871 16:17:42 -- nvmf/common.sh@294 -- # net_devs=() 00:18:43.871 16:17:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:43.871 16:17:42 -- nvmf/common.sh@295 -- # e810=() 00:18:43.871 16:17:42 -- nvmf/common.sh@295 -- # local -ga e810 00:18:43.871 16:17:42 -- nvmf/common.sh@296 -- # x722=() 00:18:43.871 16:17:42 -- nvmf/common.sh@296 -- # local -ga x722 00:18:43.871 16:17:42 -- nvmf/common.sh@297 -- # mlx=() 00:18:43.871 16:17:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:43.871 16:17:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.871 16:17:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:43.871 16:17:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:43.871 16:17:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.871 16:17:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:43.871 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:43.871 16:17:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:43.871 16:17:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:43.871 16:17:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:43.871 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:43.871 16:17:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:43.872 16:17:42 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.872 16:17:42 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.872 16:17:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.872 16:17:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.872 16:17:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:43.872 Found net devices under 0000:27:00.0: cvl_0_0 00:18:43.872 16:17:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.872 16:17:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:43.872 16:17:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.872 16:17:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:43.872 16:17:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.872 16:17:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:43.872 Found net devices under 0000:27:00.1: cvl_0_1 00:18:43.872 16:17:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.872 16:17:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:43.872 16:17:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:43.872 16:17:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:43.872 16:17:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.872 16:17:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.872 16:17:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.872 16:17:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:43.872 16:17:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.872 16:17:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.872 16:17:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:43.872 16:17:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.872 16:17:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.872 16:17:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:43.872 16:17:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:43.872 16:17:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.872 16:17:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.872 16:17:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.872 16:17:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.872 16:17:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:43.872 16:17:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.872 16:17:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.872 16:17:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.872 16:17:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:43.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:18:43.872 00:18:43.872 --- 10.0.0.2 ping statistics --- 00:18:43.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.872 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:18:43.872 16:17:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.474 ms 00:18:43.872 00:18:43.872 --- 10.0.0.1 ping statistics --- 00:18:43.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.872 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:18:43.872 16:17:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.872 16:17:42 -- nvmf/common.sh@410 -- # return 0 00:18:43.872 16:17:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:43.872 16:17:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.872 16:17:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:43.872 16:17:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.872 16:17:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:43.872 16:17:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:43.872 16:17:42 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:43.872 16:17:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:43.872 16:17:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:43.872 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.872 16:17:42 -- nvmf/common.sh@469 -- # nvmfpid=3062765 00:18:43.872 16:17:42 -- nvmf/common.sh@470 -- # waitforlisten 3062765 00:18:43.872 16:17:42 -- common/autotest_common.sh@819 -- # '[' -z 3062765 ']' 00:18:43.872 16:17:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.872 16:17:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.872 16:17:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.872 16:17:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.872 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.872 16:17:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:43.872 [2024-04-23 16:17:42.408874] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:43.872 [2024-04-23 16:17:42.408975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.872 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.872 [2024-04-23 16:17:42.530447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.872 [2024-04-23 16:17:42.626807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:43.872 [2024-04-23 16:17:42.626981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.872 [2024-04-23 16:17:42.626995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.872 [2024-04-23 16:17:42.627005] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
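The trace above builds the loopback TCP fabric that every nvmf target test in this run relies on: one port of the NIC stays in the default namespace as the initiator, the other is moved into a network namespace and addressed as the target, and nvmf_tgt is then launched inside that namespace. A minimal sketch of that setup, using the interface names, addresses and port seen in the log (the helper name is illustrative, not the actual nvmf_tcp_init from nvmf/common.sh):

# Minimal sketch of the TCP test fabric seen above: one port stays in the
# default namespace as the initiator, the other moves into a namespace and
# acts as the target. Names and addresses mirror the log; this helper is
# illustrative, not the real nvmf_tcp_init() from nvmf/common.sh.
setup_tcp_fabric_sketch() {
    local tgt_if=cvl_0_0 init_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$init_if"

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"            # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev "$init_if"       # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target address

    ip link set "$init_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1       # target -> initiator

    # nvmf_tgt is then started inside the namespace (cf. nvmfappstart above):
    # ip netns exec "$ns" .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
}

Separating the two sides into different namespaces is what lets the same host act as both initiator (10.0.0.1) and target (10.0.0.2) over TCP port 4420.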
00:18:43.872 [2024-04-23 16:17:42.627082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.872 [2024-04-23 16:17:42.627187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.872 [2024-04-23 16:17:42.627284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.872 [2024-04-23 16:17:42.627296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.442 16:17:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.442 16:17:43 -- common/autotest_common.sh@852 -- # return 0 00:18:44.442 16:17:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.442 16:17:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:44.442 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:18:44.442 16:17:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.442 16:17:43 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:44.442 16:17:43 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15337 00:18:44.442 [2024-04-23 16:17:43.288344] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:44.442 16:17:43 -- target/invalid.sh@40 -- # out='request: 00:18:44.442 { 00:18:44.442 "nqn": "nqn.2016-06.io.spdk:cnode15337", 00:18:44.442 "tgt_name": "foobar", 00:18:44.442 "method": "nvmf_create_subsystem", 00:18:44.442 "req_id": 1 00:18:44.442 } 00:18:44.442 Got JSON-RPC error response 00:18:44.442 response: 00:18:44.442 { 00:18:44.442 "code": -32603, 00:18:44.442 "message": "Unable to find target foobar" 00:18:44.442 }' 00:18:44.442 16:17:43 -- target/invalid.sh@41 -- # [[ request: 00:18:44.442 { 00:18:44.442 "nqn": "nqn.2016-06.io.spdk:cnode15337", 00:18:44.442 "tgt_name": "foobar", 00:18:44.442 "method": "nvmf_create_subsystem", 00:18:44.442 "req_id": 1 00:18:44.442 } 00:18:44.442 Got JSON-RPC error response 00:18:44.442 response: 00:18:44.442 { 00:18:44.442 "code": -32603, 00:18:44.442 "message": "Unable to find target foobar" 00:18:44.442 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:44.442 16:17:43 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:44.442 16:17:43 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode767 00:18:44.700 [2024-04-23 16:17:43.436595] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode767: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:44.700 16:17:43 -- target/invalid.sh@45 -- # out='request: 00:18:44.700 { 00:18:44.700 "nqn": "nqn.2016-06.io.spdk:cnode767", 00:18:44.700 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:44.700 "method": "nvmf_create_subsystem", 00:18:44.700 "req_id": 1 00:18:44.700 } 00:18:44.700 Got JSON-RPC error response 00:18:44.700 response: 00:18:44.700 { 00:18:44.700 "code": -32602, 00:18:44.700 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:44.700 }' 00:18:44.700 16:17:43 -- target/invalid.sh@46 -- # [[ request: 00:18:44.700 { 00:18:44.700 "nqn": "nqn.2016-06.io.spdk:cnode767", 00:18:44.700 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:44.700 "method": "nvmf_create_subsystem", 00:18:44.700 "req_id": 1 00:18:44.700 } 00:18:44.700 Got JSON-RPC error response 00:18:44.700 response: 00:18:44.700 { 00:18:44.700 "code": 
-32602, 00:18:44.700 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:44.700 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:44.700 16:17:43 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:44.700 16:17:43 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1307 00:18:44.700 [2024-04-23 16:17:43.576756] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1307: invalid model number 'SPDK_Controller' 00:18:44.700 16:17:43 -- target/invalid.sh@50 -- # out='request: 00:18:44.700 { 00:18:44.700 "nqn": "nqn.2016-06.io.spdk:cnode1307", 00:18:44.700 "model_number": "SPDK_Controller\u001f", 00:18:44.700 "method": "nvmf_create_subsystem", 00:18:44.700 "req_id": 1 00:18:44.700 } 00:18:44.700 Got JSON-RPC error response 00:18:44.700 response: 00:18:44.700 { 00:18:44.700 "code": -32602, 00:18:44.700 "message": "Invalid MN SPDK_Controller\u001f" 00:18:44.700 }' 00:18:44.700 16:17:43 -- target/invalid.sh@51 -- # [[ request: 00:18:44.700 { 00:18:44.700 "nqn": "nqn.2016-06.io.spdk:cnode1307", 00:18:44.700 "model_number": "SPDK_Controller\u001f", 00:18:44.700 "method": "nvmf_create_subsystem", 00:18:44.700 "req_id": 1 00:18:44.700 } 00:18:44.700 Got JSON-RPC error response 00:18:44.700 response: 00:18:44.700 { 00:18:44.700 "code": -32602, 00:18:44.700 "message": "Invalid MN SPDK_Controller\u001f" 00:18:44.700 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:44.700 16:17:43 -- target/invalid.sh@54 -- # gen_random_s 21 00:18:44.700 16:17:43 -- target/invalid.sh@19 -- # local length=21 ll 00:18:44.700 16:17:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:44.700 16:17:43 -- target/invalid.sh@21 -- # local chars 00:18:44.700 16:17:43 -- target/invalid.sh@22 -- # local string 00:18:44.700 16:17:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:44.700 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 74 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # string+=J 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 47 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # string+=/ 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 94 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # string+='^' 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 94 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x5e' 
00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # string+='^' 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 68 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # string+=D 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.701 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.701 16:17:43 -- target/invalid.sh@25 -- # printf %x 64 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=@ 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 124 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+='|' 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 53 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=5 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 116 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=t 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 97 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=a 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 108 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=l 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 88 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=X 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 71 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=G 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 111 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=o 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 47 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x2f' 
00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=/ 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 96 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+='`' 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 95 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=_ 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 60 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+='<' 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 60 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+='<' 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 120 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=x 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # printf %x 87 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:44.959 16:17:43 -- target/invalid.sh@25 -- # string+=W 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:44.959 16:17:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:44.959 16:17:43 -- target/invalid.sh@28 -- # [[ J == \- ]] 00:18:44.959 16:17:43 -- target/invalid.sh@31 -- # echo 'J/^^D@|5talXGo/`_<,itrB0I'\''e.Lb/HU_' 00:18:45.221 16:17:44 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']zc:zoYR]TA% p,]96seib)>,itrB0I'\''e.Lb/HU_' nqn.2016-06.io.spdk:cnode12641 00:18:45.480 [2024-04-23 16:17:44.161485] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12641: invalid model number ']zc:zoYR]TA% p,]96seib)>,itrB0I'e.Lb/HU_' 00:18:45.480 16:17:44 -- target/invalid.sh@58 -- # out='request: 00:18:45.480 { 00:18:45.480 "nqn": "nqn.2016-06.io.spdk:cnode12641", 00:18:45.480 "model_number": "]zc:zoYR]TA% p,]96seib)>,itr\u007fB0I'\''e.Lb/HU_", 00:18:45.480 "method": "nvmf_create_subsystem", 00:18:45.480 "req_id": 1 00:18:45.480 } 00:18:45.480 Got JSON-RPC error response 00:18:45.480 response: 00:18:45.480 { 00:18:45.480 "code": -32602, 00:18:45.480 "message": "Invalid MN ]zc:zoYR]TA% p,]96seib)>,itr\u007fB0I'\''e.Lb/HU_" 00:18:45.480 }' 00:18:45.480 16:17:44 -- target/invalid.sh@59 -- # [[ request: 00:18:45.481 { 00:18:45.481 "nqn": "nqn.2016-06.io.spdk:cnode12641", 00:18:45.481 "model_number": "]zc:zoYR]TA% p,]96seib)>,itr\u007fB0I'e.Lb/HU_", 00:18:45.481 "method": "nvmf_create_subsystem", 00:18:45.481 "req_id": 1 00:18:45.481 } 00:18:45.481 Got JSON-RPC error 
response 00:18:45.481 response: 00:18:45.481 { 00:18:45.481 "code": -32602, 00:18:45.481 "message": "Invalid MN ]zc:zoYR]TA% p,]96seib)>,itr\u007fB0I'e.Lb/HU_" 00:18:45.481 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:45.481 16:17:44 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:45.481 [2024-04-23 16:17:44.313730] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.481 16:17:44 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:45.739 16:17:44 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:45.739 16:17:44 -- target/invalid.sh@67 -- # echo '' 00:18:45.739 16:17:44 -- target/invalid.sh@67 -- # head -n 1 00:18:45.739 16:17:44 -- target/invalid.sh@67 -- # IP= 00:18:45.739 16:17:44 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:45.739 [2024-04-23 16:17:44.662240] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:46.026 16:17:44 -- target/invalid.sh@69 -- # out='request: 00:18:46.026 { 00:18:46.026 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:46.026 "listen_address": { 00:18:46.026 "trtype": "tcp", 00:18:46.026 "traddr": "", 00:18:46.026 "trsvcid": "4421" 00:18:46.026 }, 00:18:46.026 "method": "nvmf_subsystem_remove_listener", 00:18:46.026 "req_id": 1 00:18:46.026 } 00:18:46.026 Got JSON-RPC error response 00:18:46.026 response: 00:18:46.026 { 00:18:46.026 "code": -32602, 00:18:46.026 "message": "Invalid parameters" 00:18:46.026 }' 00:18:46.026 16:17:44 -- target/invalid.sh@70 -- # [[ request: 00:18:46.026 { 00:18:46.026 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:46.026 "listen_address": { 00:18:46.026 "trtype": "tcp", 00:18:46.026 "traddr": "", 00:18:46.026 "trsvcid": "4421" 00:18:46.026 }, 00:18:46.026 "method": "nvmf_subsystem_remove_listener", 00:18:46.026 "req_id": 1 00:18:46.026 } 00:18:46.026 Got JSON-RPC error response 00:18:46.026 response: 00:18:46.026 { 00:18:46.026 "code": -32602, 00:18:46.026 "message": "Invalid parameters" 00:18:46.026 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:46.026 16:17:44 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13330 -i 0 00:18:46.026 [2024-04-23 16:17:44.814415] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13330: invalid cntlid range [0-65519] 00:18:46.026 16:17:44 -- target/invalid.sh@73 -- # out='request: 00:18:46.026 { 00:18:46.026 "nqn": "nqn.2016-06.io.spdk:cnode13330", 00:18:46.026 "min_cntlid": 0, 00:18:46.026 "method": "nvmf_create_subsystem", 00:18:46.026 "req_id": 1 00:18:46.026 } 00:18:46.026 Got JSON-RPC error response 00:18:46.026 response: 00:18:46.026 { 00:18:46.026 "code": -32602, 00:18:46.026 "message": "Invalid cntlid range [0-65519]" 00:18:46.026 }' 00:18:46.026 16:17:44 -- target/invalid.sh@74 -- # [[ request: 00:18:46.026 { 00:18:46.026 "nqn": "nqn.2016-06.io.spdk:cnode13330", 00:18:46.026 "min_cntlid": 0, 00:18:46.026 "method": "nvmf_create_subsystem", 00:18:46.026 "req_id": 1 00:18:46.026 } 00:18:46.026 Got JSON-RPC error response 00:18:46.026 response: 00:18:46.026 { 00:18:46.026 "code": -32602, 00:18:46.026 "message": "Invalid cntlid range [0-65519]" 00:18:46.026 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
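Each of these negative checks follows the same shape: issue an RPC that should be rejected, capture the JSON-RPC error text, and glob-match the reported message. A condensed sketch of that pattern, using the same rpc.py path as the trace (expect_rpc_error and its assumption that rpc.py exits non-zero on a JSON-RPC error are illustrative, not part of invalid.sh):

# Condensed sketch of the negative-test pattern used by invalid.sh: run an RPC
# that should be rejected, keep its error output, and assert on the message.
# The rpc.py path matches the trace; expect_rpc_error itself is illustrative.
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

expect_rpc_error() {
    local pattern=$1; shift
    local out
    out=$("$rpc" "$@" 2>&1) && return 1          # the call is expected to fail
    [[ $out == *"$pattern"* ]]                    # and to report the expected reason
}

# e.g. cntlid bounds: 0 and 65520 both fall outside the accepted range, so
# both requests are rejected with "Invalid cntlid range":
expect_rpc_error 'Invalid cntlid range' \
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13330 -i 0
expect_rpc_error 'Invalid cntlid range' \
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25666 -i 65520

The remaining cntlid cases below (-i 65520, -I 0, -I 65520, -i 6 -I 5) exercise the same bounds: min_cntlid and max_cntlid must both fall within [1, 65519], with min no greater than max.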
00:18:46.026 16:17:44 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25666 -i 65520 00:18:46.327 [2024-04-23 16:17:44.970590] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25666: invalid cntlid range [65520-65519] 00:18:46.327 16:17:44 -- target/invalid.sh@75 -- # out='request: 00:18:46.327 { 00:18:46.327 "nqn": "nqn.2016-06.io.spdk:cnode25666", 00:18:46.327 "min_cntlid": 65520, 00:18:46.327 "method": "nvmf_create_subsystem", 00:18:46.327 "req_id": 1 00:18:46.327 } 00:18:46.327 Got JSON-RPC error response 00:18:46.327 response: 00:18:46.327 { 00:18:46.327 "code": -32602, 00:18:46.327 "message": "Invalid cntlid range [65520-65519]" 00:18:46.327 }' 00:18:46.327 16:17:44 -- target/invalid.sh@76 -- # [[ request: 00:18:46.327 { 00:18:46.327 "nqn": "nqn.2016-06.io.spdk:cnode25666", 00:18:46.327 "min_cntlid": 65520, 00:18:46.327 "method": "nvmf_create_subsystem", 00:18:46.327 "req_id": 1 00:18:46.327 } 00:18:46.327 Got JSON-RPC error response 00:18:46.327 response: 00:18:46.327 { 00:18:46.327 "code": -32602, 00:18:46.327 "message": "Invalid cntlid range [65520-65519]" 00:18:46.327 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:46.327 16:17:44 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28507 -I 0 00:18:46.327 [2024-04-23 16:17:45.122830] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28507: invalid cntlid range [1-0] 00:18:46.327 16:17:45 -- target/invalid.sh@77 -- # out='request: 00:18:46.327 { 00:18:46.327 "nqn": "nqn.2016-06.io.spdk:cnode28507", 00:18:46.327 "max_cntlid": 0, 00:18:46.327 "method": "nvmf_create_subsystem", 00:18:46.327 "req_id": 1 00:18:46.327 } 00:18:46.327 Got JSON-RPC error response 00:18:46.327 response: 00:18:46.327 { 00:18:46.327 "code": -32602, 00:18:46.327 "message": "Invalid cntlid range [1-0]" 00:18:46.327 }' 00:18:46.327 16:17:45 -- target/invalid.sh@78 -- # [[ request: 00:18:46.327 { 00:18:46.327 "nqn": "nqn.2016-06.io.spdk:cnode28507", 00:18:46.327 "max_cntlid": 0, 00:18:46.327 "method": "nvmf_create_subsystem", 00:18:46.327 "req_id": 1 00:18:46.327 } 00:18:46.327 Got JSON-RPC error response 00:18:46.327 response: 00:18:46.327 { 00:18:46.327 "code": -32602, 00:18:46.327 "message": "Invalid cntlid range [1-0]" 00:18:46.327 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:46.327 16:17:45 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20341 -I 65520 00:18:46.585 [2024-04-23 16:17:45.262988] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20341: invalid cntlid range [1-65520] 00:18:46.585 16:17:45 -- target/invalid.sh@79 -- # out='request: 00:18:46.585 { 00:18:46.585 "nqn": "nqn.2016-06.io.spdk:cnode20341", 00:18:46.585 "max_cntlid": 65520, 00:18:46.585 "method": "nvmf_create_subsystem", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "Invalid cntlid range [1-65520]" 00:18:46.585 }' 00:18:46.585 16:17:45 -- target/invalid.sh@80 -- # [[ request: 00:18:46.585 { 00:18:46.585 "nqn": "nqn.2016-06.io.spdk:cnode20341", 00:18:46.585 "max_cntlid": 65520, 00:18:46.585 "method": "nvmf_create_subsystem", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 
Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "Invalid cntlid range [1-65520]" 00:18:46.585 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:46.585 16:17:45 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21467 -i 6 -I 5 00:18:46.585 [2024-04-23 16:17:45.403239] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21467: invalid cntlid range [6-5] 00:18:46.585 16:17:45 -- target/invalid.sh@83 -- # out='request: 00:18:46.585 { 00:18:46.585 "nqn": "nqn.2016-06.io.spdk:cnode21467", 00:18:46.585 "min_cntlid": 6, 00:18:46.585 "max_cntlid": 5, 00:18:46.585 "method": "nvmf_create_subsystem", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "Invalid cntlid range [6-5]" 00:18:46.585 }' 00:18:46.585 16:17:45 -- target/invalid.sh@84 -- # [[ request: 00:18:46.585 { 00:18:46.585 "nqn": "nqn.2016-06.io.spdk:cnode21467", 00:18:46.585 "min_cntlid": 6, 00:18:46.585 "max_cntlid": 5, 00:18:46.585 "method": "nvmf_create_subsystem", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "Invalid cntlid range [6-5]" 00:18:46.585 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:46.585 16:17:45 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:46.585 16:17:45 -- target/invalid.sh@87 -- # out='request: 00:18:46.585 { 00:18:46.585 "name": "foobar", 00:18:46.585 "method": "nvmf_delete_target", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:46.585 }' 00:18:46.585 16:17:45 -- target/invalid.sh@88 -- # [[ request: 00:18:46.585 { 00:18:46.585 "name": "foobar", 00:18:46.585 "method": "nvmf_delete_target", 00:18:46.585 "req_id": 1 00:18:46.585 } 00:18:46.585 Got JSON-RPC error response 00:18:46.585 response: 00:18:46.585 { 00:18:46.585 "code": -32602, 00:18:46.585 "message": "The specified target doesn't exist, cannot delete it." 
00:18:46.585 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:46.585 16:17:45 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:46.585 16:17:45 -- target/invalid.sh@91 -- # nvmftestfini 00:18:46.585 16:17:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:46.585 16:17:45 -- nvmf/common.sh@116 -- # sync 00:18:46.585 16:17:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:46.585 16:17:45 -- nvmf/common.sh@119 -- # set +e 00:18:46.585 16:17:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:46.585 16:17:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:46.585 rmmod nvme_tcp 00:18:46.844 rmmod nvme_fabrics 00:18:46.844 rmmod nvme_keyring 00:18:46.844 16:17:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:46.844 16:17:45 -- nvmf/common.sh@123 -- # set -e 00:18:46.844 16:17:45 -- nvmf/common.sh@124 -- # return 0 00:18:46.844 16:17:45 -- nvmf/common.sh@477 -- # '[' -n 3062765 ']' 00:18:46.844 16:17:45 -- nvmf/common.sh@478 -- # killprocess 3062765 00:18:46.844 16:17:45 -- common/autotest_common.sh@926 -- # '[' -z 3062765 ']' 00:18:46.844 16:17:45 -- common/autotest_common.sh@930 -- # kill -0 3062765 00:18:46.844 16:17:45 -- common/autotest_common.sh@931 -- # uname 00:18:46.844 16:17:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:46.844 16:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3062765 00:18:46.844 16:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:46.844 16:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:46.844 16:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3062765' 00:18:46.844 killing process with pid 3062765 00:18:46.844 16:17:45 -- common/autotest_common.sh@945 -- # kill 3062765 00:18:46.844 16:17:45 -- common/autotest_common.sh@950 -- # wait 3062765 00:18:47.414 16:17:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:47.414 16:17:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:47.414 16:17:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:47.414 16:17:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.414 16:17:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:47.414 16:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.414 16:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.414 16:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.328 16:17:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:49.328 00:18:49.328 real 0m11.165s 00:18:49.328 user 0m16.540s 00:18:49.328 sys 0m4.802s 00:18:49.328 16:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.328 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:49.328 ************************************ 00:18:49.328 END TEST nvmf_invalid 00:18:49.328 ************************************ 00:18:49.328 16:17:48 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:49.328 16:17:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:49.328 16:17:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:49.328 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:49.328 ************************************ 00:18:49.328 START TEST nvmf_abort 00:18:49.328 ************************************ 00:18:49.328 16:17:48 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:49.328 * Looking for test storage... 00:18:49.328 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:49.328 16:17:48 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.328 16:17:48 -- nvmf/common.sh@7 -- # uname -s 00:18:49.328 16:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.328 16:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.328 16:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.328 16:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.328 16:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.328 16:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.328 16:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.328 16:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.328 16:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.328 16:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.590 16:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:49.590 16:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:49.590 16:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.590 16:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.590 16:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:49.590 16:17:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:49.590 16:17:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.590 16:17:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.590 16:17:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.590 16:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.590 16:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.590 16:17:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.590 16:17:48 -- paths/export.sh@5 -- # export PATH 00:18:49.591 16:17:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.591 16:17:48 -- nvmf/common.sh@46 -- # : 0 00:18:49.591 16:17:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:49.591 16:17:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:49.591 16:17:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:49.591 16:17:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.591 16:17:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.591 16:17:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:49.591 16:17:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:49.591 16:17:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:49.591 16:17:48 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.591 16:17:48 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:49.591 16:17:48 -- target/abort.sh@14 -- # nvmftestinit 00:18:49.591 16:17:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:49.591 16:17:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.591 16:17:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:49.591 16:17:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:49.591 16:17:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:49.591 16:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.591 16:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.591 16:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.591 16:17:48 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:18:49.591 16:17:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:49.591 16:17:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:49.591 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:54.863 16:17:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:54.863 16:17:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:54.863 16:17:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:54.863 16:17:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:54.863 16:17:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:54.863 16:17:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:54.863 16:17:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:54.863 16:17:53 -- nvmf/common.sh@294 -- # net_devs=() 00:18:54.863 16:17:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:54.863 16:17:53 -- 
nvmf/common.sh@295 -- # e810=() 00:18:54.863 16:17:53 -- nvmf/common.sh@295 -- # local -ga e810 00:18:54.863 16:17:53 -- nvmf/common.sh@296 -- # x722=() 00:18:54.863 16:17:53 -- nvmf/common.sh@296 -- # local -ga x722 00:18:54.863 16:17:53 -- nvmf/common.sh@297 -- # mlx=() 00:18:54.863 16:17:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:54.863 16:17:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.863 16:17:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:54.863 16:17:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:54.863 16:17:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:54.863 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:54.863 16:17:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:54.863 16:17:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:54.863 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:54.863 16:17:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:54.863 16:17:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.863 16:17:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.863 16:17:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:54.863 Found net devices under 0000:27:00.0: cvl_0_0 00:18:54.863 
16:17:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.863 16:17:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:54.863 16:17:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.863 16:17:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.863 16:17:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:54.863 Found net devices under 0000:27:00.1: cvl_0_1 00:18:54.863 16:17:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.863 16:17:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:54.863 16:17:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:54.863 16:17:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:54.863 16:17:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.863 16:17:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.863 16:17:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.863 16:17:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:54.863 16:17:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.863 16:17:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.863 16:17:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:54.863 16:17:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.863 16:17:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.863 16:17:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:54.863 16:17:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:54.863 16:17:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.863 16:17:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.863 16:17:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.863 16:17:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.863 16:17:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:54.863 16:17:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.863 16:17:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.863 16:17:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.123 16:17:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:55.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:18:55.123 00:18:55.123 --- 10.0.0.2 ping statistics --- 00:18:55.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.123 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:18:55.123 16:17:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:18:55.123 00:18:55.123 --- 10.0.0.1 ping statistics --- 00:18:55.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.123 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:18:55.123 16:17:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.123 16:17:53 -- nvmf/common.sh@410 -- # return 0 00:18:55.123 16:17:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:55.123 16:17:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.123 16:17:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:55.123 16:17:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:55.123 16:17:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.123 16:17:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:55.123 16:17:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:55.123 16:17:53 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:18:55.123 16:17:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:55.123 16:17:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:55.123 16:17:53 -- common/autotest_common.sh@10 -- # set +x 00:18:55.123 16:17:53 -- nvmf/common.sh@469 -- # nvmfpid=3067718 00:18:55.123 16:17:53 -- nvmf/common.sh@470 -- # waitforlisten 3067718 00:18:55.123 16:17:53 -- common/autotest_common.sh@819 -- # '[' -z 3067718 ']' 00:18:55.123 16:17:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.123 16:17:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:55.123 16:17:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.123 16:17:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:55.123 16:17:53 -- common/autotest_common.sh@10 -- # set +x 00:18:55.123 16:17:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:55.123 [2024-04-23 16:17:53.895072] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:18:55.123 [2024-04-23 16:17:53.895144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.123 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.123 [2024-04-23 16:17:53.986484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:55.381 [2024-04-23 16:17:54.083021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:55.381 [2024-04-23 16:17:54.083188] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.381 [2024-04-23 16:17:54.083201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.381 [2024-04-23 16:17:54.083210] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
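For reference, the network plumbing that nvmftestinit performed above condenses to roughly the following sketch. The interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and port 4420 are taken from this run; the real common.sh adds checks and fallbacks not shown here.

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # clear stale addresses
    ip netns add cvl_0_0_ns_spdk                                  # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside netns)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # sanity check, host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # sanity check, target -> host
    modprobe nvme-tcp                                             # initiator-side kernel transport
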
00:18:55.381 [2024-04-23 16:17:54.083355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.381 [2024-04-23 16:17:54.083455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.381 [2024-04-23 16:17:54.083465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.951 16:17:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.951 16:17:54 -- common/autotest_common.sh@852 -- # return 0 00:18:55.951 16:17:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:55.951 16:17:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 16:17:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.951 16:17:54 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 [2024-04-23 16:17:54.649666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 Malloc0 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 Delay0 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 [2024-04-23 16:17:54.734797] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:55.951 16:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:55.951 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 16:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:55.951 16:17:54 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:18:55.951 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.211 [2024-04-23 16:17:54.887769] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:58.749 Initializing NVMe Controllers 00:18:58.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:58.749 controller IO queue size 128 less than required 00:18:58.749 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:18:58.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:58.749 Initialization complete. Launching workers. 00:18:58.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43127 00:18:58.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43188, failed to submit 62 00:18:58.749 success 43127, unsuccess 61, failed 0 00:18:58.749 16:17:57 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:58.749 16:17:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.749 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:18:58.749 16:17:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.749 16:17:57 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:58.749 16:17:57 -- target/abort.sh@38 -- # nvmftestfini 00:18:58.749 16:17:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:58.749 16:17:57 -- nvmf/common.sh@116 -- # sync 00:18:58.749 16:17:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:58.749 16:17:57 -- nvmf/common.sh@119 -- # set +e 00:18:58.749 16:17:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:58.749 16:17:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:58.749 rmmod nvme_tcp 00:18:58.749 rmmod nvme_fabrics 00:18:58.749 rmmod nvme_keyring 00:18:58.749 16:17:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:58.749 16:17:57 -- nvmf/common.sh@123 -- # set -e 00:18:58.749 16:17:57 -- nvmf/common.sh@124 -- # return 0 00:18:58.749 16:17:57 -- nvmf/common.sh@477 -- # '[' -n 3067718 ']' 00:18:58.749 16:17:57 -- nvmf/common.sh@478 -- # killprocess 3067718 00:18:58.749 16:17:57 -- common/autotest_common.sh@926 -- # '[' -z 3067718 ']' 00:18:58.749 16:17:57 -- common/autotest_common.sh@930 -- # kill -0 3067718 00:18:58.749 16:17:57 -- common/autotest_common.sh@931 -- # uname 00:18:58.749 16:17:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:58.749 16:17:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3067718 00:18:58.749 16:17:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:58.749 16:17:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:58.749 16:17:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3067718' 00:18:58.749 killing process with pid 3067718 00:18:58.749 16:17:57 -- common/autotest_common.sh@945 -- # kill 3067718 00:18:58.749 16:17:57 -- common/autotest_common.sh@950 -- # wait 3067718 00:18:59.008 16:17:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:59.008 16:17:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:59.008 16:17:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:59.008 16:17:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.008 16:17:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:59.008 
16:17:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.008 16:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.008 16:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.915 16:17:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:00.915 00:19:00.915 real 0m11.603s 00:19:00.915 user 0m14.060s 00:19:00.915 sys 0m4.913s 00:19:00.915 16:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.915 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:19:00.915 ************************************ 00:19:00.915 END TEST nvmf_abort 00:19:00.915 ************************************ 00:19:00.915 16:17:59 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:00.915 16:17:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:00.915 16:17:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:00.915 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:19:00.915 ************************************ 00:19:00.915 START TEST nvmf_ns_hotplug_stress 00:19:00.915 ************************************ 00:19:00.915 16:17:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:01.176 * Looking for test storage... 00:19:01.176 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:01.176 16:17:59 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.176 16:17:59 -- nvmf/common.sh@7 -- # uname -s 00:19:01.176 16:17:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.176 16:17:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.176 16:17:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.176 16:17:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.176 16:17:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.176 16:17:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.176 16:17:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.176 16:17:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.176 16:17:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.176 16:17:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.176 16:17:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:01.176 16:17:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:01.176 16:17:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.176 16:17:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.176 16:17:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:01.176 16:17:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:01.176 16:17:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.176 16:17:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.176 16:17:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.176 16:17:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.176 16:17:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.176 16:17:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.176 16:17:59 -- paths/export.sh@5 -- # export PATH 00:19:01.176 16:17:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.176 16:17:59 -- nvmf/common.sh@46 -- # : 0 00:19:01.176 16:17:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:01.176 16:17:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:01.176 16:17:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:01.176 16:17:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.176 16:17:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.176 16:17:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:01.176 16:17:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:01.176 16:17:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:01.176 16:17:59 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:01.176 16:17:59 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:19:01.176 16:17:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:01.176 16:17:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.176 16:17:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:01.176 16:17:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:01.176 16:17:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:01.176 16:17:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:01.176 16:17:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.176 16:17:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.176 16:17:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:01.176 16:17:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:01.176 16:17:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:01.176 16:17:59 -- common/autotest_common.sh@10 -- # set +x 00:19:06.453 16:18:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:06.453 16:18:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:06.453 16:18:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:06.453 16:18:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:06.453 16:18:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:06.453 16:18:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:06.453 16:18:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:06.453 16:18:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:06.453 16:18:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:06.453 16:18:05 -- nvmf/common.sh@295 -- # e810=() 00:19:06.453 16:18:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:06.453 16:18:05 -- nvmf/common.sh@296 -- # x722=() 00:19:06.453 16:18:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:06.453 16:18:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:06.453 16:18:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:06.453 16:18:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.453 16:18:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:06.453 16:18:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:06.453 16:18:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:06.453 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:06.453 16:18:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:06.453 16:18:05 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:06.453 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:06.453 16:18:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:06.453 16:18:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.453 16:18:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.453 16:18:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:06.453 Found net devices under 0000:27:00.0: cvl_0_0 00:19:06.453 16:18:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.453 16:18:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:06.453 16:18:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.453 16:18:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.453 16:18:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:06.453 Found net devices under 0000:27:00.1: cvl_0_1 00:19:06.453 16:18:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.453 16:18:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:06.453 16:18:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:06.453 16:18:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.453 16:18:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.453 16:18:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.453 16:18:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:06.453 16:18:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.453 16:18:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.453 16:18:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:06.453 16:18:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.453 16:18:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.453 16:18:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:06.453 16:18:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:06.453 16:18:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.453 16:18:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.453 16:18:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.453 16:18:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.453 16:18:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:06.453 16:18:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.453 16:18:05 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:06.453 16:18:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.453 16:18:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:06.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:19:06.453 00:19:06.453 --- 10.0.0.2 ping statistics --- 00:19:06.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.453 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:19:06.453 16:18:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.639 ms 00:19:06.453 00:19:06.453 --- 10.0.0.1 ping statistics --- 00:19:06.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.453 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:19:06.453 16:18:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.453 16:18:05 -- nvmf/common.sh@410 -- # return 0 00:19:06.453 16:18:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:06.453 16:18:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.453 16:18:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:06.453 16:18:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.453 16:18:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:06.453 16:18:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:06.715 16:18:05 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:19:06.715 16:18:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:06.715 16:18:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:06.715 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:19:06.715 16:18:05 -- nvmf/common.sh@469 -- # nvmfpid=3072599 00:19:06.715 16:18:05 -- nvmf/common.sh@470 -- # waitforlisten 3072599 00:19:06.715 16:18:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:06.715 16:18:05 -- common/autotest_common.sh@819 -- # '[' -z 3072599 ']' 00:19:06.715 16:18:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.715 16:18:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:06.715 16:18:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.715 16:18:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:06.715 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:19:06.715 [2024-04-23 16:18:05.479455] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
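At this point the target application for the hotplug test is launched inside the namespace and the harness waits for its RPC socket before configuring it. Roughly, with the pid handling condensed (paths and flags as logged; waitforlisten is the autotest helper that polls /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # block until the app listens on /var/tmp/spdk.sock
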
00:19:06.715 [2024-04-23 16:18:05.479564] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.715 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.715 [2024-04-23 16:18:05.607095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.975 [2024-04-23 16:18:05.719258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:06.975 [2024-04-23 16:18:05.719454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.975 [2024-04-23 16:18:05.719471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.975 [2024-04-23 16:18:05.719482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.975 [2024-04-23 16:18:05.719547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.975 [2024-04-23 16:18:05.719577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.975 [2024-04-23 16:18:05.719586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.542 16:18:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:07.542 16:18:06 -- common/autotest_common.sh@852 -- # return 0 00:19:07.542 16:18:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:07.542 16:18:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:07.542 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:19:07.542 16:18:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.542 16:18:06 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:19:07.542 16:18:06 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:07.542 [2024-04-23 16:18:06.362045] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.542 16:18:06 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:07.799 16:18:06 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.799 [2024-04-23 16:18:06.643216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.799 16:18:06 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:08.058 16:18:06 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:08.058 Malloc0 00:19:08.058 16:18:06 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:08.317 Delay0 00:19:08.317 16:18:07 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:08.317 16:18:07 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:19:08.577 NULL1 00:19:08.577 16:18:07 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:08.837 16:18:07 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3073408 00:19:08.838 16:18:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:08.838 16:18:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.838 16:18:07 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:08.838 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.838 16:18:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.096 16:18:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:19:09.096 16:18:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:09.096 [2024-04-23 16:18:07.986333] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:19:09.096 true 00:19:09.096 16:18:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:09.096 16:18:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.354 16:18:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.354 16:18:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:19:09.354 16:18:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:09.612 true 00:19:09.612 16:18:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:09.612 16:18:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.612 16:18:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.872 16:18:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:19:09.872 16:18:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:09.872 true 00:19:09.872 16:18:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:09.872 16:18:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.132 16:18:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.392 16:18:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:19:10.392 16:18:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:10.392 true 00:19:10.392 16:18:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:10.392 16:18:09 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.651 16:18:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.651 16:18:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:19:10.651 16:18:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:10.910 true 00:19:10.910 16:18:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:10.910 16:18:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.910 16:18:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:11.168 16:18:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:19:11.168 16:18:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:11.168 true 00:19:11.168 16:18:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:11.168 16:18:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.426 16:18:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:11.686 16:18:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:19:11.686 16:18:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:11.686 true 00:19:11.686 16:18:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:11.686 16:18:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.945 16:18:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:11.945 16:18:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:19:11.945 16:18:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:12.204 true 00:19:12.204 16:18:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:12.204 16:18:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.204 16:18:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:12.461 16:18:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:19:12.461 16:18:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:12.461 true 00:19:12.461 16:18:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:12.461 16:18:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.718 16:18:11 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:12.718 16:18:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:19:12.718 16:18:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:12.976 true 00:19:12.976 16:18:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:12.976 16:18:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.976 16:18:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:13.235 16:18:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:19:13.235 16:18:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:13.494 true 00:19:13.494 16:18:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:13.494 16:18:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.494 16:18:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:13.753 16:18:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:19:13.753 16:18:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:13.753 true 00:19:13.753 16:18:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:13.753 16:18:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.011 16:18:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:14.011 16:18:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:19:14.011 16:18:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:14.269 true 00:19:14.269 16:18:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:14.269 16:18:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.269 16:18:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:14.528 16:18:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:19:14.528 16:18:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:14.528 true 00:19:14.786 16:18:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:14.786 16:18:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.786 16:18:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:15.046 16:18:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 
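The pattern repeating above is the hotplug-stress loop itself. Condensed into a sketch, with the one-time target configuration in front (subsystem name, bdev names, sizes, and perf arguments are taken from this run; the actual ns_hotplug_stress.sh wraps these in its own helpers):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # one-time target config, mirroring the rpc.py calls logged earlier in this test
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # keep I/O running against the target for the whole stress window
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # stress loop: while perf is alive, repeatedly yank and re-add namespace 1
    # and grow the null bdev by one block each pass (1001, 1002, ... as seen above)
    size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        size=$((size + 1))
        $rpc bdev_null_resize NULL1 "$size"
    done
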
00:19:15.046 16:18:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:15.046 true 00:19:15.046 16:18:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:15.046 16:18:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.304 16:18:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:15.304 16:18:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:19:15.304 16:18:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:15.564 true 00:19:15.564 16:18:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:15.564 16:18:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.564 16:18:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:15.822 16:18:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:19:15.822 16:18:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:15.822 true 00:19:15.822 16:18:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:15.822 16:18:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.082 16:18:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.082 16:18:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:19:16.082 16:18:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:16.341 true 00:19:16.341 16:18:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:16.341 16:18:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.341 16:18:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.599 16:18:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:19:16.599 16:18:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:16.599 true 00:19:16.857 16:18:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:16.857 16:18:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.857 16:18:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.115 16:18:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:19:17.115 16:18:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:17.115 true 00:19:17.115 16:18:15 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:17.115 16:18:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.374 16:18:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.374 16:18:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:19:17.374 16:18:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:17.633 true 00:19:17.633 16:18:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:17.633 16:18:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.633 16:18:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.892 16:18:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:19:17.892 16:18:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:17.892 true 00:19:17.892 16:18:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:17.892 16:18:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:18.151 16:18:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:18.151 16:18:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:19:18.151 16:18:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:18.409 true 00:19:18.409 16:18:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:18.409 16:18:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:18.409 16:18:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:18.668 16:18:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:19:18.668 16:18:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:18.668 true 00:19:18.668 16:18:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:18.668 16:18:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:18.928 16:18:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:19.187 16:18:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:19:19.187 16:18:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:19.187 true 00:19:19.187 16:18:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:19.187 16:18:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.446 16:18:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:19.446 16:18:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:19:19.446 16:18:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:19.704 true 00:19:19.704 16:18:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:19.704 16:18:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.704 16:18:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:19.962 16:18:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:19:19.962 16:18:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:19.962 true 00:19:19.962 16:18:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:19.962 16:18:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:20.221 16:18:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.221 16:18:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:19:20.221 16:18:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:20.482 true 00:19:20.482 16:18:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:20.482 16:18:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:20.482 16:18:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.742 16:18:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:19:20.742 16:18:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:20.742 true 00:19:21.003 16:18:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:21.003 16:18:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:21.003 16:18:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:21.264 16:18:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:19:21.264 16:18:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:19:21.264 true 00:19:21.264 16:18:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:21.265 16:18:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:21.523 16:18:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:21.523 16:18:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:19:21.523 16:18:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:19:21.780 true 00:19:21.780 16:18:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:21.780 16:18:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.039 16:18:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:22.039 16:18:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:19:22.039 16:18:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:19:22.298 true 00:19:22.298 16:18:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:22.298 16:18:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.298 16:18:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:22.558 16:18:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:19:22.558 16:18:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:19:22.558 true 00:19:22.558 16:18:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:22.558 16:18:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.819 16:18:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:23.077 16:18:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:19:23.077 16:18:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:19:23.077 true 00:19:23.077 16:18:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:23.077 16:18:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.335 16:18:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:23.335 16:18:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:19:23.335 16:18:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:19:23.594 true 00:19:23.594 16:18:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:23.594 16:18:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.594 16:18:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:23.852 16:18:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:19:23.852 16:18:22 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:19:23.852 true 00:19:23.852 16:18:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:23.852 16:18:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.110 16:18:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.110 16:18:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:19:24.110 16:18:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:19:24.368 true 00:19:24.368 16:18:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:24.368 16:18:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.368 16:18:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.627 16:18:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:19:24.627 16:18:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:19:24.627 true 00:19:24.886 16:18:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:24.886 16:18:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.886 16:18:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:25.230 16:18:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:19:25.230 16:18:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:19:25.230 true 00:19:25.230 16:18:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:25.230 16:18:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.537 16:18:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:25.537 16:18:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:19:25.537 16:18:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:19:25.537 true 00:19:25.537 16:18:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:25.537 16:18:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.796 16:18:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:25.796 16:18:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:19:25.796 16:18:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:19:26.054 true 00:19:26.054 16:18:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 
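[annotation] The xtrace above keeps repeating the same five-step cycle from ns_hotplug_stress.sh (script lines 35-41): check that the background I/O generator (pid 3073408) is still alive, hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev as a namespace, bump null_size, and resize the NULL1 bdev. A minimal sketch of that loop, reconstructed from the visible trace (rpc_py, perf_pid and the starting size are assumptions, not necessarily the script's real variable names):

    # Reconstructed sketch of the hot-plug cycle traced above (ns_hotplug_stress.sh@35-41).
    # rpc_py, perf_pid and null_size=1000 are assumptions; the real script may differ.
    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    perf_pid=3073408        # pid of the background I/O generator started earlier in the test
    null_size=1000

    while kill -0 "$perf_pid" 2>/dev/null; do                                  # @35: loop until the I/O generator exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @36: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # @37: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                                           # @40: grow the target size each pass
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                          # @41: resize NULL1 while I/O is in flight
    done

Once the perf process exits, kill -0 fails ("No such process" later in this log), the loop falls through, and the script moves on to wait/cleanup.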
00:19:26.054 16:18:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.054 16:18:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:26.314 16:18:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:19:26.314 16:18:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:19:26.314 true 00:19:26.314 16:18:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:26.314 16:18:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.574 16:18:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:26.833 16:18:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:19:26.833 16:18:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:19:26.833 true 00:19:26.833 16:18:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:26.833 16:18:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.833 16:18:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.092 16:18:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:19:27.092 16:18:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:19:27.092 true 00:19:27.092 16:18:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:27.092 16:18:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.351 16:18:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.610 16:18:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:19:27.610 16:18:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:19:27.610 true 00:19:27.610 16:18:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:27.610 16:18:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.867 16:18:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.867 16:18:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:19:27.867 16:18:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:19:28.125 true 00:19:28.125 16:18:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:28.125 16:18:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.125 
16:18:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:28.383 16:18:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:19:28.383 16:18:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:19:28.383 true 00:19:28.383 16:18:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:28.383 16:18:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.641 16:18:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:28.641 16:18:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:19:28.641 16:18:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:19:28.899 true 00:19:28.899 16:18:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:28.899 16:18:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.899 16:18:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.158 16:18:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:19:29.158 16:18:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:19:29.158 true 00:19:29.158 16:18:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:29.158 16:18:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.418 16:18:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.677 16:18:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:19:29.677 16:18:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:19:29.677 true 00:19:29.677 16:18:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:29.677 16:18:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.935 16:18:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.935 16:18:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:19:29.936 16:18:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:19:30.194 true 00:19:30.194 16:18:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:30.194 16:18:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.194 16:18:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.452 16:18:29 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:19:30.452 16:18:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:19:30.452 true 00:19:30.452 16:18:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:30.452 16:18:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.711 16:18:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.711 16:18:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:19:30.711 16:18:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:19:30.970 true 00:19:30.970 16:18:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:30.970 16:18:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:31.229 16:18:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:31.229 16:18:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:19:31.229 16:18:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:19:31.488 true 00:19:31.488 16:18:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:31.488 16:18:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:31.488 16:18:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:31.747 16:18:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:19:31.747 16:18:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:19:31.747 true 00:19:31.747 16:18:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:31.747 16:18:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.005 16:18:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.005 16:18:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:19:32.005 16:18:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:19:32.264 true 00:19:32.264 16:18:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:32.264 16:18:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.264 16:18:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.521 16:18:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:19:32.521 16:18:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1057 00:19:32.521 true 00:19:32.521 16:18:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:32.521 16:18:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.779 16:18:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.779 16:18:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:19:32.779 16:18:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:19:33.038 true 00:19:33.038 16:18:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:33.038 16:18:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.038 16:18:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.296 16:18:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:19:33.296 16:18:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:19:33.296 true 00:19:33.555 16:18:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:33.555 16:18:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.555 16:18:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.815 16:18:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:19:33.815 16:18:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:19:33.815 true 00:19:33.815 16:18:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:33.815 16:18:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.073 16:18:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.073 16:18:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1061 00:19:34.073 16:18:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:19:34.332 true 00:19:34.332 16:18:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:34.332 16:18:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.332 16:18:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.590 16:18:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1062 00:19:34.590 16:18:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:19:34.590 true 00:19:34.590 16:18:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:34.590 16:18:33 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.848 16:18:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.848 16:18:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1063 00:19:34.848 16:18:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:19:35.106 true 00:19:35.106 16:18:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:35.106 16:18:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.364 16:18:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.364 16:18:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1064 00:19:35.364 16:18:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:19:35.623 true 00:19:35.623 16:18:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:35.623 16:18:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.623 16:18:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.882 16:18:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1065 00:19:35.882 16:18:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:19:35.882 true 00:19:35.882 16:18:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:35.882 16:18:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.140 16:18:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.140 16:18:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1066 00:19:36.140 16:18:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:19:36.398 true 00:19:36.398 16:18:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:36.398 16:18:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.398 16:18:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.657 16:18:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1067 00:19:36.657 16:18:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1067 00:19:36.918 true 00:19:36.918 16:18:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:36.918 16:18:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.918 16:18:35 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:37.179 16:18:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1068 00:19:37.179 16:18:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1068 00:19:37.179 true 00:19:37.179 16:18:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:37.179 16:18:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:37.440 16:18:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:37.440 16:18:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1069 00:19:37.440 16:18:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1069 00:19:37.699 true 00:19:37.699 16:18:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:37.699 16:18:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:37.699 16:18:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:37.956 16:18:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1070 00:19:37.956 16:18:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1070 00:19:38.215 true 00:19:38.215 16:18:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:38.215 16:18:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.215 16:18:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.475 16:18:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1071 00:19:38.475 16:18:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1071 00:19:38.475 true 00:19:38.475 16:18:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:38.475 16:18:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.737 16:18:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.737 16:18:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1072 00:19:38.737 16:18:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1072 00:19:38.998 Initializing NVMe Controllers 00:19:38.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.998 Controller IO queue size 128, less than required. 00:19:38.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:38.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:38.998 Initialization complete. Launching workers. 
00:19:38.998 ======================================================== 00:19:38.998 Latency(us) 00:19:38.998 Device Information : IOPS MiB/s Average min max 00:19:38.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30833.83 15.06 4151.15 1883.95 8593.15 00:19:38.998 ======================================================== 00:19:38.998 Total : 30833.83 15.06 4151.15 1883.95 8593.15 00:19:38.998 00:19:38.998 true 00:19:38.998 16:18:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3073408 00:19:38.998 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3073408) - No such process 00:19:38.998 16:18:37 -- target/ns_hotplug_stress.sh@44 -- # wait 3073408 00:19:38.998 16:18:37 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:38.998 16:18:37 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:19:38.998 16:18:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.998 16:18:37 -- nvmf/common.sh@116 -- # sync 00:19:38.998 16:18:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.998 16:18:37 -- nvmf/common.sh@119 -- # set +e 00:19:38.998 16:18:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.998 16:18:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.998 rmmod nvme_tcp 00:19:38.998 rmmod nvme_fabrics 00:19:38.998 rmmod nvme_keyring 00:19:38.998 16:18:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.998 16:18:37 -- nvmf/common.sh@123 -- # set -e 00:19:38.998 16:18:37 -- nvmf/common.sh@124 -- # return 0 00:19:38.998 16:18:37 -- nvmf/common.sh@477 -- # '[' -n 3072599 ']' 00:19:38.998 16:18:37 -- nvmf/common.sh@478 -- # killprocess 3072599 00:19:38.998 16:18:37 -- common/autotest_common.sh@926 -- # '[' -z 3072599 ']' 00:19:38.998 16:18:37 -- common/autotest_common.sh@930 -- # kill -0 3072599 00:19:38.998 16:18:37 -- common/autotest_common.sh@931 -- # uname 00:19:38.998 16:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.998 16:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3072599 00:19:38.998 16:18:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:38.998 16:18:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:38.998 16:18:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3072599' 00:19:38.998 killing process with pid 3072599 00:19:38.998 16:18:37 -- common/autotest_common.sh@945 -- # kill 3072599 00:19:38.998 16:18:37 -- common/autotest_common.sh@950 -- # wait 3072599 00:19:39.567 16:18:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.567 16:18:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.567 16:18:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.567 16:18:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.567 16:18:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:39.567 16:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.567 16:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.567 16:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.106 16:18:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:42.106 00:19:42.106 real 0m40.641s 00:19:42.106 user 2m32.421s 00:19:42.106 sys 0m11.564s 00:19:42.106 16:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.106 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:42.106 ************************************ 00:19:42.106 END TEST 
nvmf_ns_hotplug_stress 00:19:42.106 ************************************ 00:19:42.106 16:18:40 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:42.106 16:18:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:42.106 16:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.106 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:42.106 ************************************ 00:19:42.106 START TEST nvmf_connect_stress 00:19:42.106 ************************************ 00:19:42.106 16:18:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:42.106 * Looking for test storage... 00:19:42.106 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:42.106 16:18:40 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.106 16:18:40 -- nvmf/common.sh@7 -- # uname -s 00:19:42.106 16:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.106 16:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.106 16:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.106 16:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.106 16:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.106 16:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.106 16:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.106 16:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.106 16:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.106 16:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.106 16:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:42.106 16:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:42.106 16:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.106 16:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.107 16:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:42.107 16:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:42.107 16:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.107 16:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.107 16:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.107 16:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.107 16:18:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.107 16:18:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.107 16:18:40 -- paths/export.sh@5 -- # export PATH 00:19:42.107 16:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.107 16:18:40 -- nvmf/common.sh@46 -- # : 0 00:19:42.107 16:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.107 16:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.107 16:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.107 16:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.107 16:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.107 16:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.107 16:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.107 16:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.107 16:18:40 -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:42.107 16:18:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:42.107 16:18:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.107 16:18:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.107 16:18:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.107 16:18:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.107 16:18:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.107 16:18:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.107 16:18:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.107 16:18:40 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:19:42.107 16:18:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.107 16:18:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.107 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:47.387 16:18:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:47.387 16:18:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:47.387 16:18:45 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:19:47.387 16:18:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:47.387 16:18:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:47.387 16:18:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:47.387 16:18:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:47.387 16:18:45 -- nvmf/common.sh@294 -- # net_devs=() 00:19:47.387 16:18:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:47.387 16:18:45 -- nvmf/common.sh@295 -- # e810=() 00:19:47.387 16:18:45 -- nvmf/common.sh@295 -- # local -ga e810 00:19:47.387 16:18:45 -- nvmf/common.sh@296 -- # x722=() 00:19:47.387 16:18:45 -- nvmf/common.sh@296 -- # local -ga x722 00:19:47.387 16:18:45 -- nvmf/common.sh@297 -- # mlx=() 00:19:47.387 16:18:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:47.387 16:18:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.387 16:18:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:47.387 16:18:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:47.387 16:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:47.387 16:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:47.387 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:47.387 16:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:47.387 16:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:47.387 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:47.387 16:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:47.387 16:18:45 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:19:47.387 16:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
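[annotation] The nvmf/common.sh trace here is enumerating supported NICs by PCI vendor/device ID (this run matched two Intel 0x159b "ice" ports, 0000:27:00.0 and 0000:27:00.1) and, in the entries that follow, maps each PCI function to its kernel net device through sysfs. A small sketch of that mapping step, with the two addresses hard-coded to the ones this run detected (they would differ on another host):

    # Sketch of the PCI -> net-device mapping traced in the following entries (nvmf/common.sh).
    pci_devs=(0000:27:00.0 0000:27:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel interfaces for this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # cvl_0_0 / cvl_0_1 in this log
        net_devs+=("${pci_net_devs[@]}")
    done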
00:19:47.387 16:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.387 16:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:47.387 16:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.387 16:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:47.387 Found net devices under 0000:27:00.0: cvl_0_0 00:19:47.388 16:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.388 16:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:47.388 16:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.388 16:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:47.388 16:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.388 16:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:47.388 Found net devices under 0000:27:00.1: cvl_0_1 00:19:47.388 16:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.388 16:18:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:47.388 16:18:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:47.388 16:18:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:47.388 16:18:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:47.388 16:18:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:47.388 16:18:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.388 16:18:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.388 16:18:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.388 16:18:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:47.388 16:18:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.388 16:18:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.388 16:18:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:47.388 16:18:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.388 16:18:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.388 16:18:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:47.388 16:18:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:47.388 16:18:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.388 16:18:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.388 16:18:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.388 16:18:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.388 16:18:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:47.388 16:18:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.388 16:18:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.388 16:18:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.388 16:18:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:47.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:47.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:19:47.388 00:19:47.388 --- 10.0.0.2 ping statistics --- 00:19:47.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.388 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:19:47.388 16:18:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:19:47.388 00:19:47.388 --- 10.0.0.1 ping statistics --- 00:19:47.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.388 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:19:47.388 16:18:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.388 16:18:46 -- nvmf/common.sh@410 -- # return 0 00:19:47.388 16:18:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:47.388 16:18:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.388 16:18:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:47.388 16:18:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:47.388 16:18:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.388 16:18:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:47.388 16:18:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.388 16:18:46 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:47.388 16:18:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:47.388 16:18:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:47.388 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.388 16:18:46 -- nvmf/common.sh@469 -- # nvmfpid=3083635 00:19:47.388 16:18:46 -- nvmf/common.sh@470 -- # waitforlisten 3083635 00:19:47.388 16:18:46 -- common/autotest_common.sh@819 -- # '[' -z 3083635 ']' 00:19:47.388 16:18:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.388 16:18:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:47.388 16:18:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.388 16:18:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:47.388 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.388 16:18:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:47.388 [2024-04-23 16:18:46.203092] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:19:47.388 [2024-04-23 16:18:46.203194] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.388 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.649 [2024-04-23 16:18:46.323534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.649 [2024-04-23 16:18:46.419999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:47.649 [2024-04-23 16:18:46.420170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.649 [2024-04-23 16:18:46.420184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
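[annotation] The nvmf_tcp_init sequence traced above splits the two detected ports so that target and initiator can exchange real TCP traffic on one host: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and both directions are ping-checked. A condensed sketch using the exact commands visible in the trace (interface names and addresses are the ones this run used):

    # Condensed from the nvmf_tcp_init trace above; names/IPs are specific to this run.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # start from a clean slate
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side port moves into the namespace
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1               # initiator stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic to port 4420
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                             # sanity check: initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

Every subsequent nvmf_tgt invocation is then wrapped in "ip netns exec cvl_0_0_ns_spdk", and the connect_stress setup that follows in the trace builds the target on top of this: nvmf_create_transport -t tcp, nvmf_create_subsystem for nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_listener on 10.0.0.2:4420, and bdev_null_create NULL1, before launching the connect_stress tool from the root namespace.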
00:19:47.649 [2024-04-23 16:18:46.420193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.649 [2024-04-23 16:18:46.420326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.649 [2024-04-23 16:18:46.420428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.649 [2024-04-23 16:18:46.420437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.221 16:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.221 16:18:46 -- common/autotest_common.sh@852 -- # return 0 00:19:48.221 16:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:48.221 16:18:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:48.221 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:48.221 16:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.221 16:18:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.221 16:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.221 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:48.221 [2024-04-23 16:18:46.959420] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.221 16:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.221 16:18:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:48.221 16:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.221 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:48.221 16:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.221 16:18:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.221 16:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.221 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:48.221 [2024-04-23 16:18:46.998269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.221 16:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.221 16:18:47 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:48.221 16:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.221 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:19:48.221 NULL1 00:19:48.221 16:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.221 16:18:47 -- target/connect_stress.sh@21 -- # PERF_PID=3083801 00:19:48.221 16:18:47 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:48.221 16:18:47 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:48.221 16:18:47 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # seq 1 20 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:48.221 16:18:47 -- target/connect_stress.sh@28 -- # cat 00:19:48.221 16:18:47 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:48.221 16:18:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.221 16:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.221 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:19:48.792 16:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:48.792 16:18:47 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:48.792 16:18:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:48.792 16:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:48.792 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.050 16:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.050 16:18:47 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:49.050 16:18:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.050 16:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.050 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.308 
16:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.308 16:18:48 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:49.308 16:18:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.308 16:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.308 16:18:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.568 16:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.568 16:18:48 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:49.568 16:18:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.568 16:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.568 16:18:48 -- common/autotest_common.sh@10 -- # set +x 00:19:49.849 16:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.849 16:18:48 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:49.849 16:18:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:49.849 16:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.849 16:18:48 -- common/autotest_common.sh@10 -- # set +x 00:19:50.111 16:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.111 16:18:49 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:50.111 16:18:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.111 16:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.111 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:19:50.679 16:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.679 16:18:49 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:50.679 16:18:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.679 16:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.679 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 16:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.937 16:18:49 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:50.937 16:18:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:50.937 16:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.937 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:19:51.196 16:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.196 16:18:49 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:51.196 16:18:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.196 16:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.196 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:19:51.457 16:18:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.457 16:18:50 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:51.457 16:18:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.457 16:18:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.457 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:19:51.717 16:18:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.717 16:18:50 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:51.717 16:18:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:51.717 16:18:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.717 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:19:52.283 16:18:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.283 16:18:50 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:52.284 16:18:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.284 16:18:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.284 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:19:52.541 16:18:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.541 16:18:51 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:52.541 16:18:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.541 16:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.541 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:19:52.800 16:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.800 16:18:51 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:52.800 16:18:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:52.800 16:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.800 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:19:53.061 16:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.061 16:18:51 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:53.061 16:18:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.061 16:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.061 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:19:53.321 16:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.321 16:18:52 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:53.321 16:18:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.321 16:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.321 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:19:53.892 16:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.892 16:18:52 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:53.892 16:18:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:53.892 16:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.892 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:19:54.152 16:18:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.152 16:18:52 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:54.152 16:18:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.152 16:18:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.152 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:19:54.411 16:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.411 16:18:53 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:54.411 16:18:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.411 16:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.411 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:19:54.669 16:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.669 16:18:53 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:54.669 16:18:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.669 16:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.669 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:19:54.928 16:18:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:54.928 16:18:53 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:54.928 16:18:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:54.928 16:18:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:54.928 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:19:55.513 16:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.513 16:18:54 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:55.513 16:18:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.513 16:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.513 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:19:55.772 16:18:54 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.772 16:18:54 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:55.772 16:18:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:55.772 16:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.772 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:19:56.031 16:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.031 16:18:54 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:56.031 16:18:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.031 16:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.031 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:19:56.290 16:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.290 16:18:55 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:56.290 16:18:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.290 16:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.290 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:19:56.549 16:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.549 16:18:55 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:56.549 16:18:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:56.549 16:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.549 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:19:57.122 16:18:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.122 16:18:55 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:57.122 16:18:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.122 16:18:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.122 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:19:57.383 16:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.383 16:18:56 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:57.383 16:18:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.383 16:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.383 16:18:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.643 16:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.643 16:18:56 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:57.643 16:18:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.643 16:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.643 16:18:56 -- common/autotest_common.sh@10 -- # set +x 00:19:57.901 16:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.901 16:18:56 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:57.901 16:18:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:57.901 16:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.901 16:18:56 -- common/autotest_common.sh@10 -- # set +x 00:19:58.159 16:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.160 16:18:57 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:58.160 16:18:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:58.160 16:18:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:58.160 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:19:58.419 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.680 16:18:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:58.680 16:18:57 -- target/connect_stress.sh@34 -- # kill -0 3083801 00:19:58.680 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3083801) - No such process 00:19:58.680 
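The long run of kill -0 3083801 / rpc_cmd records above is the monitor loop in test/nvmf/target/connect_stress.sh: the test starts the connect_stress application in the background, uses the repeated "for i in $(seq 1 20)" / "cat" fragment to batch RPC requests into an rpc.txt file, and then keeps issuing RPCs for as long as the application is still alive; the loop ends once kill -0 reports "No such process". A minimal sketch of that shape, reconstructed only from the script paths, line numbers and commands visible in this log (not the verbatim SPDK source):

    # sketch: drive the target with RPCs while the background stress app is still alive
    PID=$!                                  # connect_stress app launched earlier in the test
    while kill -0 "$PID" 2>/dev/null; do    # kill -0 only checks existence, sends no signal
        rpc_cmd < rpc.txt                   # assumed: replay the batched requests built above
    done
    wait "$PID"                             # reap the app once it has exited
    rm -f rpc.txt                           # drop the temporary batch file (removed just below)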
16:18:57 -- target/connect_stress.sh@38 -- # wait 3083801 00:19:58.680 16:18:57 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:58.680 16:18:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:58.680 16:18:57 -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:58.680 16:18:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.680 16:18:57 -- nvmf/common.sh@116 -- # sync 00:19:58.680 16:18:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.680 16:18:57 -- nvmf/common.sh@119 -- # set +e 00:19:58.680 16:18:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.680 16:18:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.680 rmmod nvme_tcp 00:19:58.680 rmmod nvme_fabrics 00:19:58.680 rmmod nvme_keyring 00:19:58.680 16:18:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:58.680 16:18:57 -- nvmf/common.sh@123 -- # set -e 00:19:58.680 16:18:57 -- nvmf/common.sh@124 -- # return 0 00:19:58.680 16:18:57 -- nvmf/common.sh@477 -- # '[' -n 3083635 ']' 00:19:58.680 16:18:57 -- nvmf/common.sh@478 -- # killprocess 3083635 00:19:58.680 16:18:57 -- common/autotest_common.sh@926 -- # '[' -z 3083635 ']' 00:19:58.680 16:18:57 -- common/autotest_common.sh@930 -- # kill -0 3083635 00:19:58.680 16:18:57 -- common/autotest_common.sh@931 -- # uname 00:19:58.680 16:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:58.680 16:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3083635 00:19:58.680 16:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:58.680 16:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:58.680 16:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3083635' 00:19:58.680 killing process with pid 3083635 00:19:58.680 16:18:57 -- common/autotest_common.sh@945 -- # kill 3083635 00:19:58.680 16:18:57 -- common/autotest_common.sh@950 -- # wait 3083635 00:19:59.253 16:18:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.253 16:18:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:59.253 16:18:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:59.253 16:18:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.253 16:18:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:59.253 16:18:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.253 16:18:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.253 16:18:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.162 16:19:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:01.162 00:20:01.162 real 0m19.524s 00:20:01.162 user 0m42.012s 00:20:01.163 sys 0m7.539s 00:20:01.163 16:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.163 16:19:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 ************************************ 00:20:01.163 END TEST nvmf_connect_stress 00:20:01.163 ************************************ 00:20:01.163 16:19:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:01.163 16:19:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:01.163 16:19:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:01.163 16:19:00 -- common/autotest_common.sh@10 -- # set +x 00:20:01.163 ************************************ 00:20:01.163 START TEST 
nvmf_fused_ordering 00:20:01.163 ************************************ 00:20:01.163 16:19:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:01.422 * Looking for test storage... 00:20:01.422 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:01.422 16:19:00 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.422 16:19:00 -- nvmf/common.sh@7 -- # uname -s 00:20:01.422 16:19:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.422 16:19:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.422 16:19:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.422 16:19:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.422 16:19:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.422 16:19:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.422 16:19:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.422 16:19:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.422 16:19:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.422 16:19:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.422 16:19:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:01.422 16:19:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:01.422 16:19:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.422 16:19:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.422 16:19:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:01.422 16:19:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:01.422 16:19:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.422 16:19:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.422 16:19:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.422 16:19:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.422 16:19:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.422 16:19:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.422 16:19:00 -- paths/export.sh@5 -- # export PATH 00:20:01.423 16:19:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.423 16:19:00 -- nvmf/common.sh@46 -- # : 0 00:20:01.423 16:19:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:01.423 16:19:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:01.423 16:19:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:01.423 16:19:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.423 16:19:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.423 16:19:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:01.423 16:19:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:01.423 16:19:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:01.423 16:19:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:01.423 16:19:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:01.423 16:19:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.423 16:19:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:01.423 16:19:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:01.423 16:19:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:01.423 16:19:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.423 16:19:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.423 16:19:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.423 16:19:00 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:01.423 16:19:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:01.423 16:19:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:01.423 16:19:00 -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 16:19:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:06.816 16:19:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:06.816 16:19:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:06.816 16:19:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:06.816 16:19:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:06.816 16:19:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:06.816 16:19:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:06.816 16:19:05 -- nvmf/common.sh@294 -- # net_devs=() 00:20:06.816 16:19:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:06.816 16:19:05 -- nvmf/common.sh@295 -- # e810=() 00:20:06.816 16:19:05 -- nvmf/common.sh@295 -- # local -ga e810 00:20:06.816 16:19:05 -- nvmf/common.sh@296 -- # 
x722=() 00:20:06.816 16:19:05 -- nvmf/common.sh@296 -- # local -ga x722 00:20:06.816 16:19:05 -- nvmf/common.sh@297 -- # mlx=() 00:20:06.816 16:19:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:06.816 16:19:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.816 16:19:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:06.816 16:19:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.816 16:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:06.816 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:06.816 16:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:06.816 16:19:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:06.816 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:06.816 16:19:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:06.816 16:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.816 16:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.816 16:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:06.816 Found net devices under 0000:27:00.0: cvl_0_0 00:20:06.816 16:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.816 16:19:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
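The pci_devs / e810 / x722 / mlx records above come from gather_supported_nvmf_pci_devs in test/nvmf/common.sh: known Intel E810/X722 and Mellanox device IDs are collected into arrays, and each matching PCI address is then resolved to its kernel network interface by listing sysfs, which is what produces the "Found net devices under 0000:27:00.x" lines. The core of that resolution, as it appears in the trace (a condensed sketch, not the full helper):

    # sketch: resolve each supported PCI device to its net interface name via sysfs
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")                    # remember interfaces for the test
    done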
00:20:06.816 16:19:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.816 16:19:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.816 16:19:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:06.816 Found net devices under 0000:27:00.1: cvl_0_1 00:20:06.816 16:19:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.816 16:19:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:06.816 16:19:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:06.816 16:19:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.816 16:19:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.816 16:19:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.816 16:19:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:06.816 16:19:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.816 16:19:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.816 16:19:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:06.816 16:19:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.816 16:19:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.816 16:19:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:06.816 16:19:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:06.816 16:19:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.816 16:19:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.816 16:19:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.816 16:19:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.816 16:19:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:06.816 16:19:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.816 16:19:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.816 16:19:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.816 16:19:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:06.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:20:06.816 00:20:06.816 --- 10.0.0.2 ping statistics --- 00:20:06.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.816 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:20:06.816 16:19:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:20:06.816 00:20:06.816 --- 10.0.0.1 ping statistics --- 00:20:06.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.816 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:20:06.816 16:19:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.816 16:19:05 -- nvmf/common.sh@410 -- # return 0 00:20:06.816 16:19:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.816 16:19:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.816 16:19:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.816 16:19:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.816 16:19:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.816 16:19:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.816 16:19:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:06.816 16:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.816 16:19:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.816 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 16:19:05 -- nvmf/common.sh@469 -- # nvmfpid=3089809 00:20:06.816 16:19:05 -- nvmf/common.sh@470 -- # waitforlisten 3089809 00:20:06.816 16:19:05 -- common/autotest_common.sh@819 -- # '[' -z 3089809 ']' 00:20:06.816 16:19:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.816 16:19:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.816 16:19:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.816 16:19:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.816 16:19:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.816 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 [2024-04-23 16:19:05.560374] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:06.816 [2024-04-23 16:19:05.560450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.816 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.816 [2024-04-23 16:19:05.653920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.075 [2024-04-23 16:19:05.749324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:07.075 [2024-04-23 16:19:05.749488] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.075 [2024-04-23 16:19:05.749500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.075 [2024-04-23 16:19:05.749510] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
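The nvmf_tcp_init records above show the phy-fallback TCP topology being assembled: one port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, both directions are verified with a single ping, and the nvmf target application is then launched inside that namespace. Condensed from the commands visible in the log (paths shortened):

    # sketch: target interface isolated in a netns, initiator left in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2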
00:20:07.075 [2024-04-23 16:19:05.749534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.644 16:19:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.644 16:19:06 -- common/autotest_common.sh@852 -- # return 0 00:20:07.644 16:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:07.644 16:19:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 16:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.644 16:19:06 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 [2024-04-23 16:19:06.299520] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 [2024-04-23 16:19:06.315685] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 NULL1 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:07.644 16:19:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.644 16:19:06 -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 16:19:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.644 16:19:06 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:07.644 [2024-04-23 16:19:06.379805] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
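Once the target is up and listening on its RPC socket, fused_ordering.sh provisions it through the rpc_cmd wrapper (the nvmf_create_transport / nvmf_create_subsystem / bdev_null_create records above) and then aims the fused_ordering test binary at the resulting subsystem. Written out as explicit scripts/rpc.py invocations, the sequence is roughly equivalent to the following sketch (the test itself issues the same commands via rpc_cmd from inside the namespace):

    # sketch: TCP transport, one subsystem, a null bdev as namespace 1, then the test app
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # reported below as a 1GB namespace, 512B blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The app's own output is the fused_ordering(0) .. fused_ordering(1023) progress lines that follow.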
00:20:07.645 [2024-04-23 16:19:06.379887] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090076 ] 00:20:07.645 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.218 Attached to nqn.2016-06.io.spdk:cnode1 00:20:08.218 Namespace ID: 1 size: 1GB 00:20:08.218 fused_ordering(0) 00:20:08.218 fused_ordering(1) 00:20:08.218 fused_ordering(2) 00:20:08.218 fused_ordering(3) 00:20:08.218 fused_ordering(4) 00:20:08.218 fused_ordering(5) 00:20:08.218 fused_ordering(6) 00:20:08.218 fused_ordering(7) 00:20:08.218 fused_ordering(8) 00:20:08.218 fused_ordering(9) 00:20:08.218 fused_ordering(10) 00:20:08.218 fused_ordering(11) 00:20:08.218 fused_ordering(12) 00:20:08.218 fused_ordering(13) 00:20:08.218 fused_ordering(14) 00:20:08.218 fused_ordering(15) 00:20:08.218 fused_ordering(16) 00:20:08.218 fused_ordering(17) 00:20:08.218 fused_ordering(18) 00:20:08.218 fused_ordering(19) 00:20:08.218 fused_ordering(20) 00:20:08.218 fused_ordering(21) 00:20:08.218 fused_ordering(22) 00:20:08.218 fused_ordering(23) 00:20:08.218 fused_ordering(24) 00:20:08.218 fused_ordering(25) 00:20:08.218 fused_ordering(26) 00:20:08.218 fused_ordering(27) 00:20:08.218 fused_ordering(28) 00:20:08.218 fused_ordering(29) 00:20:08.218 fused_ordering(30) 00:20:08.218 fused_ordering(31) 00:20:08.218 fused_ordering(32) 00:20:08.218 fused_ordering(33) 00:20:08.218 fused_ordering(34) 00:20:08.218 fused_ordering(35) 00:20:08.218 fused_ordering(36) 00:20:08.218 fused_ordering(37) 00:20:08.218 fused_ordering(38) 00:20:08.218 fused_ordering(39) 00:20:08.218 fused_ordering(40) 00:20:08.218 fused_ordering(41) 00:20:08.218 fused_ordering(42) 00:20:08.218 fused_ordering(43) 00:20:08.218 fused_ordering(44) 00:20:08.218 fused_ordering(45) 00:20:08.218 fused_ordering(46) 00:20:08.218 fused_ordering(47) 00:20:08.218 fused_ordering(48) 00:20:08.218 fused_ordering(49) 00:20:08.218 fused_ordering(50) 00:20:08.218 fused_ordering(51) 00:20:08.218 fused_ordering(52) 00:20:08.218 fused_ordering(53) 00:20:08.218 fused_ordering(54) 00:20:08.218 fused_ordering(55) 00:20:08.218 fused_ordering(56) 00:20:08.218 fused_ordering(57) 00:20:08.218 fused_ordering(58) 00:20:08.218 fused_ordering(59) 00:20:08.218 fused_ordering(60) 00:20:08.218 fused_ordering(61) 00:20:08.218 fused_ordering(62) 00:20:08.218 fused_ordering(63) 00:20:08.218 fused_ordering(64) 00:20:08.218 fused_ordering(65) 00:20:08.218 fused_ordering(66) 00:20:08.218 fused_ordering(67) 00:20:08.218 fused_ordering(68) 00:20:08.218 fused_ordering(69) 00:20:08.218 fused_ordering(70) 00:20:08.218 fused_ordering(71) 00:20:08.218 fused_ordering(72) 00:20:08.218 fused_ordering(73) 00:20:08.218 fused_ordering(74) 00:20:08.218 fused_ordering(75) 00:20:08.218 fused_ordering(76) 00:20:08.218 fused_ordering(77) 00:20:08.218 fused_ordering(78) 00:20:08.218 fused_ordering(79) 00:20:08.218 fused_ordering(80) 00:20:08.218 fused_ordering(81) 00:20:08.218 fused_ordering(82) 00:20:08.218 fused_ordering(83) 00:20:08.218 fused_ordering(84) 00:20:08.218 fused_ordering(85) 00:20:08.218 fused_ordering(86) 00:20:08.218 fused_ordering(87) 00:20:08.218 fused_ordering(88) 00:20:08.218 fused_ordering(89) 00:20:08.218 fused_ordering(90) 00:20:08.218 fused_ordering(91) 00:20:08.218 fused_ordering(92) 00:20:08.218 fused_ordering(93) 00:20:08.218 fused_ordering(94) 00:20:08.218 fused_ordering(95) 00:20:08.218 fused_ordering(96) 00:20:08.218 
fused_ordering(97) 00:20:08.218 fused_ordering(98) 00:20:08.218 fused_ordering(99) 00:20:08.218 fused_ordering(100) 00:20:08.218 fused_ordering(101) 00:20:08.218 fused_ordering(102) 00:20:08.218 fused_ordering(103) 00:20:08.218 fused_ordering(104) 00:20:08.218 fused_ordering(105) 00:20:08.218 fused_ordering(106) 00:20:08.218 fused_ordering(107) 00:20:08.218 fused_ordering(108) 00:20:08.218 fused_ordering(109) 00:20:08.218 fused_ordering(110) 00:20:08.218 fused_ordering(111) 00:20:08.218 fused_ordering(112) 00:20:08.218 fused_ordering(113) 00:20:08.218 fused_ordering(114) 00:20:08.218 fused_ordering(115) 00:20:08.218 fused_ordering(116) 00:20:08.218 fused_ordering(117) 00:20:08.218 fused_ordering(118) 00:20:08.218 fused_ordering(119) 00:20:08.218 fused_ordering(120) 00:20:08.218 fused_ordering(121) 00:20:08.218 fused_ordering(122) 00:20:08.218 fused_ordering(123) 00:20:08.218 fused_ordering(124) 00:20:08.218 fused_ordering(125) 00:20:08.218 fused_ordering(126) 00:20:08.218 fused_ordering(127) 00:20:08.218 fused_ordering(128) 00:20:08.218 fused_ordering(129) 00:20:08.218 fused_ordering(130) 00:20:08.218 fused_ordering(131) 00:20:08.218 fused_ordering(132) 00:20:08.218 fused_ordering(133) 00:20:08.218 fused_ordering(134) 00:20:08.218 fused_ordering(135) 00:20:08.218 fused_ordering(136) 00:20:08.218 fused_ordering(137) 00:20:08.218 fused_ordering(138) 00:20:08.218 fused_ordering(139) 00:20:08.218 fused_ordering(140) 00:20:08.218 fused_ordering(141) 00:20:08.218 fused_ordering(142) 00:20:08.218 fused_ordering(143) 00:20:08.218 fused_ordering(144) 00:20:08.218 fused_ordering(145) 00:20:08.218 fused_ordering(146) 00:20:08.218 fused_ordering(147) 00:20:08.218 fused_ordering(148) 00:20:08.218 fused_ordering(149) 00:20:08.218 fused_ordering(150) 00:20:08.218 fused_ordering(151) 00:20:08.218 fused_ordering(152) 00:20:08.218 fused_ordering(153) 00:20:08.218 fused_ordering(154) 00:20:08.218 fused_ordering(155) 00:20:08.218 fused_ordering(156) 00:20:08.218 fused_ordering(157) 00:20:08.218 fused_ordering(158) 00:20:08.218 fused_ordering(159) 00:20:08.219 fused_ordering(160) 00:20:08.219 fused_ordering(161) 00:20:08.219 fused_ordering(162) 00:20:08.219 fused_ordering(163) 00:20:08.219 fused_ordering(164) 00:20:08.219 fused_ordering(165) 00:20:08.219 fused_ordering(166) 00:20:08.219 fused_ordering(167) 00:20:08.219 fused_ordering(168) 00:20:08.219 fused_ordering(169) 00:20:08.219 fused_ordering(170) 00:20:08.219 fused_ordering(171) 00:20:08.219 fused_ordering(172) 00:20:08.219 fused_ordering(173) 00:20:08.219 fused_ordering(174) 00:20:08.219 fused_ordering(175) 00:20:08.219 fused_ordering(176) 00:20:08.219 fused_ordering(177) 00:20:08.219 fused_ordering(178) 00:20:08.219 fused_ordering(179) 00:20:08.219 fused_ordering(180) 00:20:08.219 fused_ordering(181) 00:20:08.219 fused_ordering(182) 00:20:08.219 fused_ordering(183) 00:20:08.219 fused_ordering(184) 00:20:08.219 fused_ordering(185) 00:20:08.219 fused_ordering(186) 00:20:08.219 fused_ordering(187) 00:20:08.219 fused_ordering(188) 00:20:08.219 fused_ordering(189) 00:20:08.219 fused_ordering(190) 00:20:08.219 fused_ordering(191) 00:20:08.219 fused_ordering(192) 00:20:08.219 fused_ordering(193) 00:20:08.219 fused_ordering(194) 00:20:08.219 fused_ordering(195) 00:20:08.219 fused_ordering(196) 00:20:08.219 fused_ordering(197) 00:20:08.219 fused_ordering(198) 00:20:08.219 fused_ordering(199) 00:20:08.219 fused_ordering(200) 00:20:08.219 fused_ordering(201) 00:20:08.219 fused_ordering(202) 00:20:08.219 fused_ordering(203) 00:20:08.219 fused_ordering(204) 
00:20:08.219 fused_ordering(205) 00:20:08.479 fused_ordering(206) 00:20:08.479 fused_ordering(207) 00:20:08.479 fused_ordering(208) 00:20:08.479 fused_ordering(209) 00:20:08.479 fused_ordering(210) 00:20:08.479 fused_ordering(211) 00:20:08.479 fused_ordering(212) 00:20:08.479 fused_ordering(213) 00:20:08.479 fused_ordering(214) 00:20:08.479 fused_ordering(215) 00:20:08.479 fused_ordering(216) 00:20:08.479 fused_ordering(217) 00:20:08.479 fused_ordering(218) 00:20:08.479 fused_ordering(219) 00:20:08.479 fused_ordering(220) 00:20:08.479 fused_ordering(221) 00:20:08.479 fused_ordering(222) 00:20:08.479 fused_ordering(223) 00:20:08.479 fused_ordering(224) 00:20:08.479 fused_ordering(225) 00:20:08.479 fused_ordering(226) 00:20:08.479 fused_ordering(227) 00:20:08.479 fused_ordering(228) 00:20:08.479 fused_ordering(229) 00:20:08.479 fused_ordering(230) 00:20:08.479 fused_ordering(231) 00:20:08.479 fused_ordering(232) 00:20:08.479 fused_ordering(233) 00:20:08.479 fused_ordering(234) 00:20:08.479 fused_ordering(235) 00:20:08.479 fused_ordering(236) 00:20:08.479 fused_ordering(237) 00:20:08.479 fused_ordering(238) 00:20:08.479 fused_ordering(239) 00:20:08.479 fused_ordering(240) 00:20:08.479 fused_ordering(241) 00:20:08.479 fused_ordering(242) 00:20:08.479 fused_ordering(243) 00:20:08.479 fused_ordering(244) 00:20:08.479 fused_ordering(245) 00:20:08.479 fused_ordering(246) 00:20:08.479 fused_ordering(247) 00:20:08.479 fused_ordering(248) 00:20:08.479 fused_ordering(249) 00:20:08.479 fused_ordering(250) 00:20:08.479 fused_ordering(251) 00:20:08.479 fused_ordering(252) 00:20:08.479 fused_ordering(253) 00:20:08.479 fused_ordering(254) 00:20:08.479 fused_ordering(255) 00:20:08.479 fused_ordering(256) 00:20:08.479 fused_ordering(257) 00:20:08.479 fused_ordering(258) 00:20:08.479 fused_ordering(259) 00:20:08.479 fused_ordering(260) 00:20:08.479 fused_ordering(261) 00:20:08.479 fused_ordering(262) 00:20:08.479 fused_ordering(263) 00:20:08.479 fused_ordering(264) 00:20:08.479 fused_ordering(265) 00:20:08.479 fused_ordering(266) 00:20:08.479 fused_ordering(267) 00:20:08.479 fused_ordering(268) 00:20:08.479 fused_ordering(269) 00:20:08.479 fused_ordering(270) 00:20:08.479 fused_ordering(271) 00:20:08.479 fused_ordering(272) 00:20:08.479 fused_ordering(273) 00:20:08.479 fused_ordering(274) 00:20:08.479 fused_ordering(275) 00:20:08.479 fused_ordering(276) 00:20:08.479 fused_ordering(277) 00:20:08.479 fused_ordering(278) 00:20:08.479 fused_ordering(279) 00:20:08.479 fused_ordering(280) 00:20:08.479 fused_ordering(281) 00:20:08.479 fused_ordering(282) 00:20:08.479 fused_ordering(283) 00:20:08.479 fused_ordering(284) 00:20:08.479 fused_ordering(285) 00:20:08.479 fused_ordering(286) 00:20:08.479 fused_ordering(287) 00:20:08.479 fused_ordering(288) 00:20:08.479 fused_ordering(289) 00:20:08.479 fused_ordering(290) 00:20:08.479 fused_ordering(291) 00:20:08.479 fused_ordering(292) 00:20:08.479 fused_ordering(293) 00:20:08.479 fused_ordering(294) 00:20:08.479 fused_ordering(295) 00:20:08.479 fused_ordering(296) 00:20:08.479 fused_ordering(297) 00:20:08.479 fused_ordering(298) 00:20:08.479 fused_ordering(299) 00:20:08.479 fused_ordering(300) 00:20:08.479 fused_ordering(301) 00:20:08.479 fused_ordering(302) 00:20:08.480 fused_ordering(303) 00:20:08.480 fused_ordering(304) 00:20:08.480 fused_ordering(305) 00:20:08.480 fused_ordering(306) 00:20:08.480 fused_ordering(307) 00:20:08.480 fused_ordering(308) 00:20:08.480 fused_ordering(309) 00:20:08.480 fused_ordering(310) 00:20:08.480 fused_ordering(311) 00:20:08.480 
fused_ordering(312) 00:20:08.480 fused_ordering(313) 00:20:08.480 fused_ordering(314) 00:20:08.480 fused_ordering(315) 00:20:08.480 fused_ordering(316) 00:20:08.480 fused_ordering(317) 00:20:08.480 fused_ordering(318) 00:20:08.480 fused_ordering(319) 00:20:08.480 fused_ordering(320) 00:20:08.480 fused_ordering(321) 00:20:08.480 fused_ordering(322) 00:20:08.480 fused_ordering(323) 00:20:08.480 fused_ordering(324) 00:20:08.480 fused_ordering(325) 00:20:08.480 fused_ordering(326) 00:20:08.480 fused_ordering(327) 00:20:08.480 fused_ordering(328) 00:20:08.480 fused_ordering(329) 00:20:08.480 fused_ordering(330) 00:20:08.480 fused_ordering(331) 00:20:08.480 fused_ordering(332) 00:20:08.480 fused_ordering(333) 00:20:08.480 fused_ordering(334) 00:20:08.480 fused_ordering(335) 00:20:08.480 fused_ordering(336) 00:20:08.480 fused_ordering(337) 00:20:08.480 fused_ordering(338) 00:20:08.480 fused_ordering(339) 00:20:08.480 fused_ordering(340) 00:20:08.480 fused_ordering(341) 00:20:08.480 fused_ordering(342) 00:20:08.480 fused_ordering(343) 00:20:08.480 fused_ordering(344) 00:20:08.480 fused_ordering(345) 00:20:08.480 fused_ordering(346) 00:20:08.480 fused_ordering(347) 00:20:08.480 fused_ordering(348) 00:20:08.480 fused_ordering(349) 00:20:08.480 fused_ordering(350) 00:20:08.480 fused_ordering(351) 00:20:08.480 fused_ordering(352) 00:20:08.480 fused_ordering(353) 00:20:08.480 fused_ordering(354) 00:20:08.480 fused_ordering(355) 00:20:08.480 fused_ordering(356) 00:20:08.480 fused_ordering(357) 00:20:08.480 fused_ordering(358) 00:20:08.480 fused_ordering(359) 00:20:08.480 fused_ordering(360) 00:20:08.480 fused_ordering(361) 00:20:08.480 fused_ordering(362) 00:20:08.480 fused_ordering(363) 00:20:08.480 fused_ordering(364) 00:20:08.480 fused_ordering(365) 00:20:08.480 fused_ordering(366) 00:20:08.480 fused_ordering(367) 00:20:08.480 fused_ordering(368) 00:20:08.480 fused_ordering(369) 00:20:08.480 fused_ordering(370) 00:20:08.480 fused_ordering(371) 00:20:08.480 fused_ordering(372) 00:20:08.480 fused_ordering(373) 00:20:08.480 fused_ordering(374) 00:20:08.480 fused_ordering(375) 00:20:08.480 fused_ordering(376) 00:20:08.480 fused_ordering(377) 00:20:08.480 fused_ordering(378) 00:20:08.480 fused_ordering(379) 00:20:08.480 fused_ordering(380) 00:20:08.480 fused_ordering(381) 00:20:08.480 fused_ordering(382) 00:20:08.480 fused_ordering(383) 00:20:08.480 fused_ordering(384) 00:20:08.480 fused_ordering(385) 00:20:08.480 fused_ordering(386) 00:20:08.480 fused_ordering(387) 00:20:08.480 fused_ordering(388) 00:20:08.480 fused_ordering(389) 00:20:08.480 fused_ordering(390) 00:20:08.480 fused_ordering(391) 00:20:08.480 fused_ordering(392) 00:20:08.480 fused_ordering(393) 00:20:08.480 fused_ordering(394) 00:20:08.480 fused_ordering(395) 00:20:08.480 fused_ordering(396) 00:20:08.480 fused_ordering(397) 00:20:08.480 fused_ordering(398) 00:20:08.480 fused_ordering(399) 00:20:08.480 fused_ordering(400) 00:20:08.480 fused_ordering(401) 00:20:08.480 fused_ordering(402) 00:20:08.480 fused_ordering(403) 00:20:08.480 fused_ordering(404) 00:20:08.480 fused_ordering(405) 00:20:08.480 fused_ordering(406) 00:20:08.480 fused_ordering(407) 00:20:08.480 fused_ordering(408) 00:20:08.480 fused_ordering(409) 00:20:08.480 fused_ordering(410) 00:20:09.047 fused_ordering(411) 00:20:09.047 fused_ordering(412) 00:20:09.047 fused_ordering(413) 00:20:09.047 fused_ordering(414) 00:20:09.047 fused_ordering(415) 00:20:09.047 fused_ordering(416) 00:20:09.047 fused_ordering(417) 00:20:09.047 fused_ordering(418) 00:20:09.047 fused_ordering(419) 
00:20:09.047 fused_ordering(420) 00:20:09.047 fused_ordering(421) 00:20:09.047 fused_ordering(422) 00:20:09.047 fused_ordering(423) 00:20:09.047 fused_ordering(424) 00:20:09.047 fused_ordering(425) 00:20:09.047 fused_ordering(426) 00:20:09.047 fused_ordering(427) 00:20:09.047 fused_ordering(428) 00:20:09.047 fused_ordering(429) 00:20:09.047 fused_ordering(430) 00:20:09.047 fused_ordering(431) 00:20:09.047 fused_ordering(432) 00:20:09.047 fused_ordering(433) 00:20:09.047 fused_ordering(434) 00:20:09.047 fused_ordering(435) 00:20:09.047 fused_ordering(436) 00:20:09.047 fused_ordering(437) 00:20:09.047 fused_ordering(438) 00:20:09.047 fused_ordering(439) 00:20:09.047 fused_ordering(440) 00:20:09.047 fused_ordering(441) 00:20:09.047 fused_ordering(442) 00:20:09.047 fused_ordering(443) 00:20:09.047 fused_ordering(444) 00:20:09.047 fused_ordering(445) 00:20:09.047 fused_ordering(446) 00:20:09.047 fused_ordering(447) 00:20:09.047 fused_ordering(448) 00:20:09.047 fused_ordering(449) 00:20:09.047 fused_ordering(450) 00:20:09.047 fused_ordering(451) 00:20:09.047 fused_ordering(452) 00:20:09.047 fused_ordering(453) 00:20:09.047 fused_ordering(454) 00:20:09.047 fused_ordering(455) 00:20:09.047 fused_ordering(456) 00:20:09.047 fused_ordering(457) 00:20:09.047 fused_ordering(458) 00:20:09.047 fused_ordering(459) 00:20:09.047 fused_ordering(460) 00:20:09.047 fused_ordering(461) 00:20:09.047 fused_ordering(462) 00:20:09.047 fused_ordering(463) 00:20:09.047 fused_ordering(464) 00:20:09.047 fused_ordering(465) 00:20:09.047 fused_ordering(466) 00:20:09.047 fused_ordering(467) 00:20:09.047 fused_ordering(468) 00:20:09.047 fused_ordering(469) 00:20:09.047 fused_ordering(470) 00:20:09.047 fused_ordering(471) 00:20:09.047 fused_ordering(472) 00:20:09.047 fused_ordering(473) 00:20:09.047 fused_ordering(474) 00:20:09.047 fused_ordering(475) 00:20:09.047 fused_ordering(476) 00:20:09.047 fused_ordering(477) 00:20:09.047 fused_ordering(478) 00:20:09.047 fused_ordering(479) 00:20:09.047 fused_ordering(480) 00:20:09.047 fused_ordering(481) 00:20:09.047 fused_ordering(482) 00:20:09.047 fused_ordering(483) 00:20:09.047 fused_ordering(484) 00:20:09.047 fused_ordering(485) 00:20:09.047 fused_ordering(486) 00:20:09.047 fused_ordering(487) 00:20:09.047 fused_ordering(488) 00:20:09.047 fused_ordering(489) 00:20:09.047 fused_ordering(490) 00:20:09.047 fused_ordering(491) 00:20:09.047 fused_ordering(492) 00:20:09.047 fused_ordering(493) 00:20:09.047 fused_ordering(494) 00:20:09.047 fused_ordering(495) 00:20:09.047 fused_ordering(496) 00:20:09.047 fused_ordering(497) 00:20:09.047 fused_ordering(498) 00:20:09.047 fused_ordering(499) 00:20:09.047 fused_ordering(500) 00:20:09.047 fused_ordering(501) 00:20:09.047 fused_ordering(502) 00:20:09.047 fused_ordering(503) 00:20:09.047 fused_ordering(504) 00:20:09.047 fused_ordering(505) 00:20:09.047 fused_ordering(506) 00:20:09.047 fused_ordering(507) 00:20:09.047 fused_ordering(508) 00:20:09.047 fused_ordering(509) 00:20:09.047 fused_ordering(510) 00:20:09.047 fused_ordering(511) 00:20:09.047 fused_ordering(512) 00:20:09.047 fused_ordering(513) 00:20:09.047 fused_ordering(514) 00:20:09.047 fused_ordering(515) 00:20:09.047 fused_ordering(516) 00:20:09.047 fused_ordering(517) 00:20:09.047 fused_ordering(518) 00:20:09.047 fused_ordering(519) 00:20:09.047 fused_ordering(520) 00:20:09.047 fused_ordering(521) 00:20:09.047 fused_ordering(522) 00:20:09.047 fused_ordering(523) 00:20:09.047 fused_ordering(524) 00:20:09.047 fused_ordering(525) 00:20:09.047 fused_ordering(526) 00:20:09.047 
fused_ordering(527) 00:20:09.047 fused_ordering(528) 00:20:09.047 fused_ordering(529) 00:20:09.047 fused_ordering(530) 00:20:09.047 fused_ordering(531) 00:20:09.047 fused_ordering(532) 00:20:09.047 fused_ordering(533) 00:20:09.047 fused_ordering(534) 00:20:09.047 fused_ordering(535) 00:20:09.047 fused_ordering(536) 00:20:09.047 fused_ordering(537) 00:20:09.047 fused_ordering(538) 00:20:09.047 fused_ordering(539) 00:20:09.047 fused_ordering(540) 00:20:09.047 fused_ordering(541) 00:20:09.047 fused_ordering(542) 00:20:09.047 fused_ordering(543) 00:20:09.047 fused_ordering(544) 00:20:09.047 fused_ordering(545) 00:20:09.047 fused_ordering(546) 00:20:09.047 fused_ordering(547) 00:20:09.047 fused_ordering(548) 00:20:09.047 fused_ordering(549) 00:20:09.047 fused_ordering(550) 00:20:09.047 fused_ordering(551) 00:20:09.047 fused_ordering(552) 00:20:09.047 fused_ordering(553) 00:20:09.047 fused_ordering(554) 00:20:09.047 fused_ordering(555) 00:20:09.047 fused_ordering(556) 00:20:09.047 fused_ordering(557) 00:20:09.047 fused_ordering(558) 00:20:09.047 fused_ordering(559) 00:20:09.047 fused_ordering(560) 00:20:09.047 fused_ordering(561) 00:20:09.047 fused_ordering(562) 00:20:09.047 fused_ordering(563) 00:20:09.047 fused_ordering(564) 00:20:09.047 fused_ordering(565) 00:20:09.047 fused_ordering(566) 00:20:09.047 fused_ordering(567) 00:20:09.047 fused_ordering(568) 00:20:09.047 fused_ordering(569) 00:20:09.047 fused_ordering(570) 00:20:09.047 fused_ordering(571) 00:20:09.047 fused_ordering(572) 00:20:09.047 fused_ordering(573) 00:20:09.047 fused_ordering(574) 00:20:09.047 fused_ordering(575) 00:20:09.047 fused_ordering(576) 00:20:09.047 fused_ordering(577) 00:20:09.047 fused_ordering(578) 00:20:09.047 fused_ordering(579) 00:20:09.047 fused_ordering(580) 00:20:09.047 fused_ordering(581) 00:20:09.047 fused_ordering(582) 00:20:09.047 fused_ordering(583) 00:20:09.047 fused_ordering(584) 00:20:09.047 fused_ordering(585) 00:20:09.047 fused_ordering(586) 00:20:09.047 fused_ordering(587) 00:20:09.047 fused_ordering(588) 00:20:09.047 fused_ordering(589) 00:20:09.047 fused_ordering(590) 00:20:09.047 fused_ordering(591) 00:20:09.047 fused_ordering(592) 00:20:09.047 fused_ordering(593) 00:20:09.047 fused_ordering(594) 00:20:09.047 fused_ordering(595) 00:20:09.047 fused_ordering(596) 00:20:09.047 fused_ordering(597) 00:20:09.047 fused_ordering(598) 00:20:09.047 fused_ordering(599) 00:20:09.047 fused_ordering(600) 00:20:09.047 fused_ordering(601) 00:20:09.048 fused_ordering(602) 00:20:09.048 fused_ordering(603) 00:20:09.048 fused_ordering(604) 00:20:09.048 fused_ordering(605) 00:20:09.048 fused_ordering(606) 00:20:09.048 fused_ordering(607) 00:20:09.048 fused_ordering(608) 00:20:09.048 fused_ordering(609) 00:20:09.048 fused_ordering(610) 00:20:09.048 fused_ordering(611) 00:20:09.048 fused_ordering(612) 00:20:09.048 fused_ordering(613) 00:20:09.048 fused_ordering(614) 00:20:09.048 fused_ordering(615) 00:20:09.619 fused_ordering(616) 00:20:09.619 fused_ordering(617) 00:20:09.619 fused_ordering(618) 00:20:09.619 fused_ordering(619) 00:20:09.619 fused_ordering(620) 00:20:09.619 fused_ordering(621) 00:20:09.619 fused_ordering(622) 00:20:09.619 fused_ordering(623) 00:20:09.619 fused_ordering(624) 00:20:09.619 fused_ordering(625) 00:20:09.619 fused_ordering(626) 00:20:09.619 fused_ordering(627) 00:20:09.619 fused_ordering(628) 00:20:09.619 fused_ordering(629) 00:20:09.619 fused_ordering(630) 00:20:09.619 fused_ordering(631) 00:20:09.619 fused_ordering(632) 00:20:09.619 fused_ordering(633) 00:20:09.619 fused_ordering(634) 
00:20:09.619 fused_ordering(635) [... fused_ordering(636) through fused_ordering(955): repeated counter entries, one per log entry, timestamps 00:20:09.619 to 00:20:10.193 ...] 00:20:10.193 fused_ordering(956) 00:20:10.193 
fused_ordering(957) 00:20:10.193 fused_ordering(958) 00:20:10.193 fused_ordering(959) 00:20:10.193 fused_ordering(960) 00:20:10.193 fused_ordering(961) 00:20:10.193 fused_ordering(962) 00:20:10.193 fused_ordering(963) 00:20:10.193 fused_ordering(964) 00:20:10.193 fused_ordering(965) 00:20:10.193 fused_ordering(966) 00:20:10.193 fused_ordering(967) 00:20:10.193 fused_ordering(968) 00:20:10.193 fused_ordering(969) 00:20:10.193 fused_ordering(970) 00:20:10.193 fused_ordering(971) 00:20:10.193 fused_ordering(972) 00:20:10.193 fused_ordering(973) 00:20:10.193 fused_ordering(974) 00:20:10.193 fused_ordering(975) 00:20:10.193 fused_ordering(976) 00:20:10.193 fused_ordering(977) 00:20:10.193 fused_ordering(978) 00:20:10.193 fused_ordering(979) 00:20:10.193 fused_ordering(980) 00:20:10.193 fused_ordering(981) 00:20:10.193 fused_ordering(982) 00:20:10.193 fused_ordering(983) 00:20:10.193 fused_ordering(984) 00:20:10.193 fused_ordering(985) 00:20:10.193 fused_ordering(986) 00:20:10.193 fused_ordering(987) 00:20:10.193 fused_ordering(988) 00:20:10.193 fused_ordering(989) 00:20:10.193 fused_ordering(990) 00:20:10.193 fused_ordering(991) 00:20:10.193 fused_ordering(992) 00:20:10.193 fused_ordering(993) 00:20:10.193 fused_ordering(994) 00:20:10.193 fused_ordering(995) 00:20:10.193 fused_ordering(996) 00:20:10.193 fused_ordering(997) 00:20:10.193 fused_ordering(998) 00:20:10.193 fused_ordering(999) 00:20:10.193 fused_ordering(1000) 00:20:10.193 fused_ordering(1001) 00:20:10.193 fused_ordering(1002) 00:20:10.193 fused_ordering(1003) 00:20:10.193 fused_ordering(1004) 00:20:10.193 fused_ordering(1005) 00:20:10.193 fused_ordering(1006) 00:20:10.193 fused_ordering(1007) 00:20:10.193 fused_ordering(1008) 00:20:10.193 fused_ordering(1009) 00:20:10.193 fused_ordering(1010) 00:20:10.193 fused_ordering(1011) 00:20:10.193 fused_ordering(1012) 00:20:10.193 fused_ordering(1013) 00:20:10.193 fused_ordering(1014) 00:20:10.193 fused_ordering(1015) 00:20:10.193 fused_ordering(1016) 00:20:10.193 fused_ordering(1017) 00:20:10.193 fused_ordering(1018) 00:20:10.193 fused_ordering(1019) 00:20:10.193 fused_ordering(1020) 00:20:10.193 fused_ordering(1021) 00:20:10.193 fused_ordering(1022) 00:20:10.193 fused_ordering(1023) 00:20:10.193 16:19:09 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:10.193 16:19:09 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:10.193 16:19:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.193 16:19:09 -- nvmf/common.sh@116 -- # sync 00:20:10.193 16:19:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:10.193 16:19:09 -- nvmf/common.sh@119 -- # set +e 00:20:10.193 16:19:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.193 16:19:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:10.193 rmmod nvme_tcp 00:20:10.193 rmmod nvme_fabrics 00:20:10.193 rmmod nvme_keyring 00:20:10.193 16:19:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.193 16:19:09 -- nvmf/common.sh@123 -- # set -e 00:20:10.193 16:19:09 -- nvmf/common.sh@124 -- # return 0 00:20:10.193 16:19:09 -- nvmf/common.sh@477 -- # '[' -n 3089809 ']' 00:20:10.193 16:19:09 -- nvmf/common.sh@478 -- # killprocess 3089809 00:20:10.193 16:19:09 -- common/autotest_common.sh@926 -- # '[' -z 3089809 ']' 00:20:10.193 16:19:09 -- common/autotest_common.sh@930 -- # kill -0 3089809 00:20:10.193 16:19:09 -- common/autotest_common.sh@931 -- # uname 00:20:10.193 16:19:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.193 16:19:09 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3089809 00:20:10.451 16:19:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.451 16:19:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.451 16:19:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3089809' 00:20:10.451 killing process with pid 3089809 00:20:10.451 16:19:09 -- common/autotest_common.sh@945 -- # kill 3089809 00:20:10.451 16:19:09 -- common/autotest_common.sh@950 -- # wait 3089809 00:20:10.710 16:19:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.710 16:19:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:10.710 16:19:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:10.710 16:19:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.710 16:19:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:10.710 16:19:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.710 16:19:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.710 16:19:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.250 16:19:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:13.250 00:20:13.250 real 0m11.615s 00:20:13.250 user 0m6.744s 00:20:13.250 sys 0m5.749s 00:20:13.250 16:19:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.250 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.250 ************************************ 00:20:13.250 END TEST nvmf_fused_ordering 00:20:13.250 ************************************ 00:20:13.250 16:19:11 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:13.250 16:19:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:13.250 16:19:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.250 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.250 ************************************ 00:20:13.250 START TEST nvmf_delete_subsystem 00:20:13.250 ************************************ 00:20:13.250 16:19:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:13.250 * Looking for test storage... 
00:20:13.250 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:13.250 16:19:11 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.250 16:19:11 -- nvmf/common.sh@7 -- # uname -s 00:20:13.250 16:19:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.250 16:19:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.250 16:19:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.250 16:19:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.250 16:19:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.250 16:19:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.250 16:19:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.250 16:19:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.250 16:19:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.250 16:19:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.250 16:19:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:13.250 16:19:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:13.250 16:19:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.250 16:19:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.250 16:19:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:13.250 16:19:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:13.250 16:19:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.250 16:19:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.250 16:19:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.250 16:19:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.250 16:19:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.250 16:19:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.250 16:19:11 -- paths/export.sh@5 -- # export PATH 00:20:13.250 16:19:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.250 16:19:11 -- nvmf/common.sh@46 -- # : 0 00:20:13.250 16:19:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:13.250 16:19:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:13.250 16:19:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:13.250 16:19:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.250 16:19:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.250 16:19:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:13.250 16:19:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:13.250 16:19:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:13.250 16:19:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:13.250 16:19:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:13.250 16:19:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.250 16:19:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:13.250 16:19:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:13.250 16:19:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:13.250 16:19:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.251 16:19:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.251 16:19:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.251 16:19:11 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:13.251 16:19:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:13.251 16:19:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:13.251 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 16:19:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:18.527 16:19:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:18.527 16:19:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:18.527 16:19:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:18.527 16:19:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:18.527 16:19:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:18.527 16:19:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:18.527 16:19:16 -- nvmf/common.sh@294 -- # net_devs=() 00:20:18.527 16:19:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:18.527 16:19:16 -- nvmf/common.sh@295 -- # e810=() 00:20:18.527 16:19:16 -- nvmf/common.sh@295 -- # local -ga e810 00:20:18.527 16:19:16 -- nvmf/common.sh@296 -- 
# x722=() 00:20:18.527 16:19:16 -- nvmf/common.sh@296 -- # local -ga x722 00:20:18.527 16:19:16 -- nvmf/common.sh@297 -- # mlx=() 00:20:18.527 16:19:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:18.527 16:19:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.527 16:19:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:18.527 16:19:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.527 16:19:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:18.527 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:18.527 16:19:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:18.527 16:19:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:18.527 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:18.527 16:19:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:18.527 16:19:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.527 16:19:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.527 16:19:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:18.527 Found net devices under 0000:27:00.0: cvl_0_0 00:20:18.527 16:19:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.527 16:19:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
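The PCI scan being traced here matches each NIC function against known Intel/Mellanox device IDs and then resolves it to its kernel net device through sysfs (the same "/sys/bus/pci/devices/$pci/net/"* glob visible in the trace). A minimal standalone sketch of that lookup, using the two PCI addresses reported in this run; the loop itself is illustrative, not the test's actual helper:

    for pci in 0000:27:00.0 0000:27:00.1; do                 # addresses reported by the scan above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # same sysfs glob nvmf/common.sh uses
        [ -e "${pci_net_devs[0]}" ] || continue               # no kernel netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")               # keep only the interface names (e.g. cvl_0_0)
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done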
00:20:18.527 16:19:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.527 16:19:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.527 16:19:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:18.527 Found net devices under 0000:27:00.1: cvl_0_1 00:20:18.527 16:19:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.527 16:19:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:18.527 16:19:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:18.527 16:19:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:18.527 16:19:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.527 16:19:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.527 16:19:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.527 16:19:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:18.527 16:19:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.527 16:19:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.527 16:19:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:18.527 16:19:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.527 16:19:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.527 16:19:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:18.527 16:19:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:18.527 16:19:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.527 16:19:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.527 16:19:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.527 16:19:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.527 16:19:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:18.527 16:19:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.527 16:19:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.527 16:19:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.527 16:19:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:18.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:20:18.527 00:20:18.527 --- 10.0.0.2 ping statistics --- 00:20:18.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.527 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:20:18.527 16:19:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:20:18.527 00:20:18.527 --- 10.0.0.1 ping statistics --- 00:20:18.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.527 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:18.527 16:19:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.527 16:19:17 -- nvmf/common.sh@410 -- # return 0 00:20:18.527 16:19:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.527 16:19:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.527 16:19:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.527 16:19:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.527 16:19:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.527 16:19:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.527 16:19:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.527 16:19:17 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:18.527 16:19:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.527 16:19:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.527 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 16:19:17 -- nvmf/common.sh@469 -- # nvmfpid=3094610 00:20:18.527 16:19:17 -- nvmf/common.sh@470 -- # waitforlisten 3094610 00:20:18.527 16:19:17 -- common/autotest_common.sh@819 -- # '[' -z 3094610 ']' 00:20:18.527 16:19:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:18.527 16:19:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.527 16:19:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.527 16:19:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.527 16:19:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.527 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 [2024-04-23 16:19:17.264949] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:18.527 [2024-04-23 16:19:17.265051] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.527 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.527 [2024-04-23 16:19:17.385007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:18.789 [2024-04-23 16:19:17.481859] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:18.789 [2024-04-23 16:19:17.482033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.789 [2024-04-23 16:19:17.482046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.789 [2024-04-23 16:19:17.482056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
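For reference, the nvmf_tcp_init plumbing traced just above reduces to splitting the two detected ports between the default namespace (initiator side, cvl_0_1) and a cvl_0_0_ns_spdk namespace (target side, cvl_0_0). A condensed sketch using the exact interface names and addresses from this run; error handling and the preliminary address flushes are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # target reachable from the initiator side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and back again, as checked in the log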
00:20:18.789 [2024-04-23 16:19:17.482119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.789 [2024-04-23 16:19:17.482128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.050 16:19:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.050 16:19:17 -- common/autotest_common.sh@852 -- # return 0 00:20:19.050 16:19:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.050 16:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.050 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 16:19:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 [2024-04-23 16:19:18.026371] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 [2024-04-23 16:19:18.042565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 NULL1 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 Delay0 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:19.311 16:19:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.311 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.311 16:19:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@28 -- # perf_pid=3094658 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:19.311 16:19:18 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:19.311 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.311 [2024-04-23 16:19:18.167535] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:21.220 16:19:20 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.220 16:19:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.220 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed 
with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 [2024-04-23 16:19:20.305840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002a40 is same with the state(5) to be set 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Write completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.479 starting I/O failed: -6 00:20:21.479 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 starting I/O failed: -6 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 starting I/O failed: -6 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 starting I/O failed: -6 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 starting I/O failed: -6 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 [2024-04-23 16:19:20.307384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000010340 is same with the state(5) to be set 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 
00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with 
error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Write completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 Read completed with error (sct=0, sc=8) 00:20:21.480 [2024-04-23 16:19:20.308213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61300000ffc0 is same with the state(5) to be set 00:20:22.421 [2024-04-23 16:19:21.266283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002180 is same with the state(5) to be set 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Write completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.421 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 [2024-04-23 16:19:21.307706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000026c0 is 
same with the state(5) to be set 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 [2024-04-23 16:19:21.307951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002340 is same with the state(5) to be set 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error 
(sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 [2024-04-23 16:19:21.308187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000002dc0 is same with the state(5) to be set 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Read completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 Write completed with error (sct=0, sc=8) 00:20:22.422 [2024-04-23 16:19:21.308316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000106c0 is same with the state(5) to be set 00:20:22.422 [2024-04-23 16:19:21.310911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002180 (9): Bad file descriptor 00:20:22.422 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:22.422 16:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.422 16:19:21 -- target/delete_subsystem.sh@34 -- # delay=0 00:20:22.422 16:19:21 -- target/delete_subsystem.sh@35 -- # kill -0 3094658 00:20:22.422 16:19:21 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:22.422 Initializing NVMe Controllers 00:20:22.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.422 Controller IO queue size 128, less than required. 00:20:22.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:22.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:22.422 Initialization complete. Launching workers. 
00:20:22.422 ======================================================== 00:20:22.422 Latency(us) 00:20:22.422 Device Information : IOPS MiB/s Average min max 00:20:22.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.26 0.09 945694.55 1577.31 1012470.29 00:20:22.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.02 0.07 886010.75 423.11 1013065.28 00:20:22.422 ======================================================== 00:20:22.422 Total : 346.29 0.17 919320.65 423.11 1013065.28 00:20:22.422 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@35 -- # kill -0 3094658 00:20:22.992 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3094658) - No such process 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@45 -- # NOT wait 3094658 00:20:22.992 16:19:21 -- common/autotest_common.sh@640 -- # local es=0 00:20:22.992 16:19:21 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3094658 00:20:22.992 16:19:21 -- common/autotest_common.sh@628 -- # local arg=wait 00:20:22.992 16:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.992 16:19:21 -- common/autotest_common.sh@632 -- # type -t wait 00:20:22.992 16:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:22.992 16:19:21 -- common/autotest_common.sh@643 -- # wait 3094658 00:20:22.992 16:19:21 -- common/autotest_common.sh@643 -- # es=1 00:20:22.992 16:19:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:22.992 16:19:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:22.992 16:19:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:22.992 16:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.992 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 16:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.992 16:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.992 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 [2024-04-23 16:19:21.841404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.992 16:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:22.992 16:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.992 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.992 16:19:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@54 -- # perf_pid=3095534 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@56 -- # delay=0 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:22.992 16:19:21 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:23.250 EAL: No free 2048 kB hugepages reported on node 
1 00:20:23.251 [2024-04-23 16:19:21.955214] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:23.509 16:19:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:23.509 16:19:22 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:23.509 16:19:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:24.079 16:19:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:24.079 16:19:22 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:24.079 16:19:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:24.648 16:19:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:24.648 16:19:23 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:24.648 16:19:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:25.215 16:19:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:25.215 16:19:23 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:25.215 16:19:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:25.473 16:19:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:25.473 16:19:24 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:25.473 16:19:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:26.044 16:19:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:26.044 16:19:24 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:26.044 16:19:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:26.305 Initializing NVMe Controllers 00:20:26.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.305 Controller IO queue size 128, less than required. 00:20:26.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:26.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:26.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:26.305 Initialization complete. Launching workers. 
00:20:26.305 ======================================================== 00:20:26.305 Latency(us) 00:20:26.305 Device Information : IOPS MiB/s Average min max 00:20:26.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004553.41 1000218.52 1043348.06 00:20:26.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004672.66 1000340.82 1010399.20 00:20:26.305 ======================================================== 00:20:26.305 Total : 256.00 0.12 1004613.03 1000218.52 1043348.06 00:20:26.305 00:20:26.566 16:19:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:26.566 16:19:25 -- target/delete_subsystem.sh@57 -- # kill -0 3095534 00:20:26.566 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3095534) - No such process 00:20:26.566 16:19:25 -- target/delete_subsystem.sh@67 -- # wait 3095534 00:20:26.566 16:19:25 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:26.566 16:19:25 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:26.566 16:19:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:26.566 16:19:25 -- nvmf/common.sh@116 -- # sync 00:20:26.566 16:19:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:26.566 16:19:25 -- nvmf/common.sh@119 -- # set +e 00:20:26.566 16:19:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:26.566 16:19:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:26.566 rmmod nvme_tcp 00:20:26.566 rmmod nvme_fabrics 00:20:26.566 rmmod nvme_keyring 00:20:26.566 16:19:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:26.566 16:19:25 -- nvmf/common.sh@123 -- # set -e 00:20:26.566 16:19:25 -- nvmf/common.sh@124 -- # return 0 00:20:26.566 16:19:25 -- nvmf/common.sh@477 -- # '[' -n 3094610 ']' 00:20:26.566 16:19:25 -- nvmf/common.sh@478 -- # killprocess 3094610 00:20:26.566 16:19:25 -- common/autotest_common.sh@926 -- # '[' -z 3094610 ']' 00:20:26.566 16:19:25 -- common/autotest_common.sh@930 -- # kill -0 3094610 00:20:26.566 16:19:25 -- common/autotest_common.sh@931 -- # uname 00:20:26.566 16:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.566 16:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3094610 00:20:26.824 16:19:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:26.824 16:19:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:26.824 16:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3094610' 00:20:26.824 killing process with pid 3094610 00:20:26.824 16:19:25 -- common/autotest_common.sh@945 -- # kill 3094610 00:20:26.824 16:19:25 -- common/autotest_common.sh@950 -- # wait 3094610 00:20:27.082 16:19:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:27.082 16:19:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:27.082 16:19:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:27.082 16:19:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.082 16:19:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:27.082 16:19:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.082 16:19:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.082 16:19:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.623 16:19:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:29.623 00:20:29.623 real 0m16.308s 00:20:29.623 user 0m30.620s 00:20:29.623 sys 0m4.780s 00:20:29.623 
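To recap the delete_subsystem run that just finished: the test stacks a delay bdev (Delay0, one-second average latency per operation) on a null bdev so that plenty of I/O is still outstanding when the subsystem is pulled, then verifies that spdk_nvme_perf exits with I/O errors instead of hanging. The sequence below is a rough reconstruction from the rpc_cmd traces above; scripts/rpc.py is assumed here as a stand-in for the test's rpc_cmd wrapper, and the wait loop is simplified:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem while I/O is in flight
    while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done         # perf must terminate on its own (with errors)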
16:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.623 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.623 ************************************ 00:20:29.623 END TEST nvmf_delete_subsystem 00:20:29.623 ************************************ 00:20:29.623 16:19:28 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:20:29.623 16:19:28 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:20:29.623 16:19:28 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:29.623 16:19:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:29.623 16:19:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:29.623 16:19:28 -- common/autotest_common.sh@10 -- # set +x 00:20:29.623 ************************************ 00:20:29.623 START TEST nvmf_host_management 00:20:29.623 ************************************ 00:20:29.623 16:19:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:29.623 * Looking for test storage... 00:20:29.623 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:29.623 16:19:28 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.623 16:19:28 -- nvmf/common.sh@7 -- # uname -s 00:20:29.623 16:19:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.623 16:19:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.623 16:19:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.623 16:19:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.623 16:19:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.623 16:19:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.623 16:19:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.623 16:19:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.623 16:19:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.623 16:19:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.623 16:19:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:29.623 16:19:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:29.623 16:19:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.623 16:19:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.623 16:19:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:29.623 16:19:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:29.623 16:19:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.623 16:19:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.623 16:19:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.623 16:19:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:29.623 16:19:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.624 16:19:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.624 16:19:28 -- paths/export.sh@5 -- # export PATH 00:20:29.624 16:19:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.624 16:19:28 -- nvmf/common.sh@46 -- # : 0 00:20:29.624 16:19:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.624 16:19:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.624 16:19:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.624 16:19:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.624 16:19:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.624 16:19:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:29.624 16:19:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.624 16:19:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.624 16:19:28 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.624 16:19:28 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.624 16:19:28 -- target/host_management.sh@104 -- # nvmftestinit 00:20:29.624 16:19:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:29.624 16:19:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.624 16:19:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:29.624 16:19:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:29.624 16:19:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:29.624 16:19:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.624 16:19:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.624 16:19:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.624 16:19:28 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:29.624 16:19:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:29.624 16:19:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:29.624 16:19:28 -- common/autotest_common.sh@10 -- # set +x 
00:20:34.916 16:19:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.916 16:19:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:34.916 16:19:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:34.916 16:19:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:34.916 16:19:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:34.916 16:19:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:34.916 16:19:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:34.916 16:19:33 -- nvmf/common.sh@294 -- # net_devs=() 00:20:34.916 16:19:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:34.916 16:19:33 -- nvmf/common.sh@295 -- # e810=() 00:20:34.916 16:19:33 -- nvmf/common.sh@295 -- # local -ga e810 00:20:34.916 16:19:33 -- nvmf/common.sh@296 -- # x722=() 00:20:34.916 16:19:33 -- nvmf/common.sh@296 -- # local -ga x722 00:20:34.916 16:19:33 -- nvmf/common.sh@297 -- # mlx=() 00:20:34.916 16:19:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:34.916 16:19:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.916 16:19:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:34.916 16:19:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.916 16:19:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:34.916 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:34.916 16:19:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:34.916 16:19:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:34.916 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:34.916 16:19:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.916 16:19:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.916 16:19:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.916 16:19:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:34.916 Found net devices under 0000:27:00.0: cvl_0_0 00:20:34.916 16:19:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.916 16:19:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:34.916 16:19:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.916 16:19:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.916 16:19:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:34.916 Found net devices under 0000:27:00.1: cvl_0_1 00:20:34.916 16:19:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.916 16:19:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:34.916 16:19:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:34.916 16:19:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.916 16:19:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.916 16:19:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.916 16:19:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:34.916 16:19:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.916 16:19:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.916 16:19:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:34.916 16:19:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.916 16:19:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.916 16:19:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:34.916 16:19:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:34.916 16:19:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.916 16:19:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.916 16:19:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.916 16:19:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.916 16:19:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:34.916 16:19:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.916 16:19:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.916 16:19:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.916 16:19:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:34.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:34.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:20:34.916 00:20:34.916 --- 10.0.0.2 ping statistics --- 00:20:34.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.916 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:20:34.916 16:19:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:20:34.916 00:20:34.916 --- 10.0.0.1 ping statistics --- 00:20:34.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.916 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:20:34.916 16:19:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.916 16:19:33 -- nvmf/common.sh@410 -- # return 0 00:20:34.916 16:19:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:34.916 16:19:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.916 16:19:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:34.916 16:19:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.916 16:19:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:34.916 16:19:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:34.916 16:19:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:20:34.916 16:19:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:34.916 16:19:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:34.916 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.916 ************************************ 00:20:34.916 START TEST nvmf_host_management 00:20:34.916 ************************************ 00:20:34.916 16:19:33 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:20:34.916 16:19:33 -- target/host_management.sh@69 -- # starttarget 00:20:34.916 16:19:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:34.916 16:19:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:34.916 16:19:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:34.916 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.916 16:19:33 -- nvmf/common.sh@469 -- # nvmfpid=3100043 00:20:34.916 16:19:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.916 16:19:33 -- nvmf/common.sh@470 -- # waitforlisten 3100043 00:20:34.916 16:19:33 -- common/autotest_common.sh@819 -- # '[' -z 3100043 ']' 00:20:34.916 16:19:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.916 16:19:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.917 16:19:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.917 16:19:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.917 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:20:34.917 [2024-04-23 16:19:33.650079] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:20:34.917 [2024-04-23 16:19:33.650150] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.917 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.917 [2024-04-23 16:19:33.742279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.917 [2024-04-23 16:19:33.843624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:34.917 [2024-04-23 16:19:33.843813] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.917 [2024-04-23 16:19:33.843829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.917 [2024-04-23 16:19:33.843839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.917 [2024-04-23 16:19:33.843911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.917 [2024-04-23 16:19:33.844041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.917 [2024-04-23 16:19:33.844065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.917 [2024-04-23 16:19:33.844092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.488 16:19:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.488 16:19:34 -- common/autotest_common.sh@852 -- # return 0 00:20:35.488 16:19:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:35.488 16:19:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:35.488 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.488 16:19:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.488 16:19:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.488 16:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:35.488 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.488 [2024-04-23 16:19:34.411136] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.488 16:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:35.750 16:19:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:35.750 16:19:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.750 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.750 16:19:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.750 16:19:34 -- target/host_management.sh@23 -- # cat 00:20:35.750 16:19:34 -- target/host_management.sh@30 -- # rpc_cmd 00:20:35.750 16:19:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:35.750 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.750 Malloc0 00:20:35.750 [2024-04-23 16:19:34.489453] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.750 16:19:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:35.750 16:19:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:35.750 16:19:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:35.750 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.750 16:19:34 -- target/host_management.sh@73 -- # perfpid=3100374 00:20:35.750 16:19:34 -- target/host_management.sh@74 -- # 
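A note on the topology behind the startup messages above: nvmf_tcp_init moved the first port (cvl_0_0) into a private network namespace as the 10.0.0.2 target side, left cvl_0_1 in the root namespace as the 10.0.0.1 initiator side, and nvmfappstart then launched the target inside that namespace. Condensed from the xtrace earlier in this test (paths shortened; not a verbatim copy of nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  # target app runs inside the namespace and listens on 10.0.0.2:4420
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E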
waitforlisten 3100374 /var/tmp/bdevperf.sock 00:20:35.750 16:19:34 -- common/autotest_common.sh@819 -- # '[' -z 3100374 ']' 00:20:35.750 16:19:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.750 16:19:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.750 16:19:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.750 16:19:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.750 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:20:35.750 16:19:34 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:35.750 16:19:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:35.750 16:19:34 -- nvmf/common.sh@520 -- # config=() 00:20:35.750 16:19:34 -- nvmf/common.sh@520 -- # local subsystem config 00:20:35.750 16:19:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:35.750 16:19:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:35.750 { 00:20:35.750 "params": { 00:20:35.750 "name": "Nvme$subsystem", 00:20:35.750 "trtype": "$TEST_TRANSPORT", 00:20:35.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.750 "adrfam": "ipv4", 00:20:35.750 "trsvcid": "$NVMF_PORT", 00:20:35.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.750 "hdgst": ${hdgst:-false}, 00:20:35.750 "ddgst": ${ddgst:-false} 00:20:35.750 }, 00:20:35.750 "method": "bdev_nvme_attach_controller" 00:20:35.750 } 00:20:35.750 EOF 00:20:35.750 )") 00:20:35.750 16:19:34 -- nvmf/common.sh@542 -- # cat 00:20:35.750 16:19:34 -- nvmf/common.sh@544 -- # jq . 00:20:35.750 16:19:34 -- nvmf/common.sh@545 -- # IFS=, 00:20:35.750 16:19:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:35.750 "params": { 00:20:35.750 "name": "Nvme0", 00:20:35.750 "trtype": "tcp", 00:20:35.750 "traddr": "10.0.0.2", 00:20:35.750 "adrfam": "ipv4", 00:20:35.750 "trsvcid": "4420", 00:20:35.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:35.750 "hdgst": false, 00:20:35.750 "ddgst": false 00:20:35.750 }, 00:20:35.750 "method": "bdev_nvme_attach_controller" 00:20:35.750 }' 00:20:35.750 [2024-04-23 16:19:34.625687] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:35.750 [2024-04-23 16:19:34.625837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100374 ] 00:20:36.012 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.012 [2024-04-23 16:19:34.760276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.012 [2024-04-23 16:19:34.856781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.273 Running I/O for 10 seconds... 
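The JSON printed just above is what gen_nvmf_target_json emits; host_management.sh hands it to bdevperf through process substitution, which is why the command line in the log shows --json /dev/fd/63. Roughly, with the path shortened and the options taken from this run:

  # bdevperf attaches to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as host0 and
  # runs a queue-depth-64, 64 KiB verify workload for 10 seconds against that bdev.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10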
00:20:36.533 16:19:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.533 16:19:35 -- common/autotest_common.sh@852 -- # return 0 00:20:36.533 16:19:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:36.533 16:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.533 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:20:36.533 16:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.533 16:19:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.533 16:19:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:36.533 16:19:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:36.533 16:19:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:36.533 16:19:35 -- target/host_management.sh@52 -- # local ret=1 00:20:36.533 16:19:35 -- target/host_management.sh@53 -- # local i 00:20:36.533 16:19:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:36.533 16:19:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:36.533 16:19:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:36.533 16:19:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:36.533 16:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.533 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:20:36.533 16:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.533 16:19:35 -- target/host_management.sh@55 -- # read_io_count=750 00:20:36.533 16:19:35 -- target/host_management.sh@58 -- # '[' 750 -ge 100 ']' 00:20:36.533 16:19:35 -- target/host_management.sh@59 -- # ret=0 00:20:36.533 16:19:35 -- target/host_management.sh@60 -- # break 00:20:36.533 16:19:35 -- target/host_management.sh@64 -- # return 0 00:20:36.533 16:19:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:36.533 16:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.533 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:20:36.533 [2024-04-23 16:19:35.402753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.533 [2024-04-23 16:19:35.402927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.402999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.534 [2024-04-23 16:19:35.403777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.534 [2024-04-23 16:19:35.403805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.534 [2024-04-23 16:19:35.403822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.534 [2024-04-23 16:19:35.403840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003140 is same with the state(5) to be set 00:20:36.534 [2024-04-23 16:19:35.403922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.403934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.403964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.403982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.403992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.534 [2024-04-23 16:19:35.404166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.534 [2024-04-23 16:19:35.404174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.535 [2024-04-23 16:19:35.404882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.535 [2024-04-23 16:19:35.404890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.404907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.404924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.404942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.404960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.404977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.404987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.536 [2024-04-23 16:19:35.405095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.405104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003d80 is same with the state(5) to be set 00:20:36.536 [2024-04-23 16:19:35.405223] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000003d80 was disconnected and freed. reset controller. 
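The burst of ABORTED - SQ DELETION completions above is the expected signature of a controller reset with I/O still in flight: every command queued on submission queue 1 is failed back with status 00/08 before the qpair is disconnected and freed. When triaging a log like this offline, a per-opcode tally is usually enough to confirm the aborts line up with the workload's queue depth; the sketch below is only an illustration and assumes the console output has been saved to build.log (a hypothetical file name).

  # count how many READ/WRITE command prints appear in the abort burst, per opcode
  grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+ len:[0-9]+' build.log \
    | awk '{ops[$1]++} END {for (o in ops) printf "%s: %d commands\n", o, ops[o]}'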
00:20:36.536 [2024-04-23 16:19:35.406114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:36.536 task offset: 108800 on job bdev=Nvme0n1 fails 00:20:36.536 00:20:36.536 Latency(us) 00:20:36.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.536 Job: Nvme0n1 ended in about 0.31 seconds with error 00:20:36.536 Verification LBA range: start 0x0 length 0x400 00:20:36.536 Nvme0n1 : 0.31 2672.67 167.04 206.09 0.00 21900.47 6105.20 26076.43 00:20:36.536 =================================================================================================================== 00:20:36.536 Total : 2672.67 167.04 206.09 0.00 21900.47 6105.20 26076.43 00:20:36.536 16:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.536 16:19:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:36.536 16:19:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:36.536 16:19:35 -- common/autotest_common.sh@10 -- # set +x 00:20:36.536 [2024-04-23 16:19:35.408492] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:36.536 [2024-04-23 16:19:35.408528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:20:36.536 [2024-04-23 16:19:35.410307] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:20:36.536 [2024-04-23 16:19:35.410601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:36.536 [2024-04-23 16:19:35.410640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.536 [2024-04-23 16:19:35.410660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:20:36.536 [2024-04-23 16:19:35.410671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:20:36.536 [2024-04-23 16:19:35.410686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:36.536 [2024-04-23 16:19:35.410696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x613000003140 00:20:36.536 [2024-04-23 16:19:35.410723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:20:36.536 [2024-04-23 16:19:35.410739] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:36.536 [2024-04-23 16:19:35.410750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:36.536 [2024-04-23 16:19:35.410761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:36.536 [2024-04-23 16:19:35.410779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
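The reconnect failure above ("Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'") is exactly the condition host_management is exercising: the target rejects the initiator until the host NQN is added to the subsystem's allowed-host list, which the script does next via rpc_cmd nvmf_subsystem_add_host. Outside the harness the same step can be issued directly against the target's RPC socket; this is only a sketch, assuming the default /var/tmp/spdk.sock and the NQNs seen in this run.

  # permit host0 on cnode0, then confirm it shows up in the subsystem's allowed-hosts list
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems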
00:20:36.536 16:19:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:36.536 16:19:35 -- target/host_management.sh@87 -- # sleep 1 00:20:37.917 16:19:36 -- target/host_management.sh@91 -- # kill -9 3100374 00:20:37.917 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3100374) - No such process 00:20:37.917 16:19:36 -- target/host_management.sh@91 -- # true 00:20:37.917 16:19:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:37.917 16:19:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:37.917 16:19:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:37.917 16:19:36 -- nvmf/common.sh@520 -- # config=() 00:20:37.917 16:19:36 -- nvmf/common.sh@520 -- # local subsystem config 00:20:37.917 16:19:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:37.917 16:19:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:37.917 { 00:20:37.917 "params": { 00:20:37.917 "name": "Nvme$subsystem", 00:20:37.917 "trtype": "$TEST_TRANSPORT", 00:20:37.917 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.917 "adrfam": "ipv4", 00:20:37.917 "trsvcid": "$NVMF_PORT", 00:20:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.917 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.917 "hdgst": ${hdgst:-false}, 00:20:37.917 "ddgst": ${ddgst:-false} 00:20:37.917 }, 00:20:37.917 "method": "bdev_nvme_attach_controller" 00:20:37.917 } 00:20:37.917 EOF 00:20:37.917 )") 00:20:37.917 16:19:36 -- nvmf/common.sh@542 -- # cat 00:20:37.917 16:19:36 -- nvmf/common.sh@544 -- # jq . 00:20:37.917 16:19:36 -- nvmf/common.sh@545 -- # IFS=, 00:20:37.917 16:19:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:37.917 "params": { 00:20:37.917 "name": "Nvme0", 00:20:37.917 "trtype": "tcp", 00:20:37.917 "traddr": "10.0.0.2", 00:20:37.917 "adrfam": "ipv4", 00:20:37.917 "trsvcid": "4420", 00:20:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.917 "hdgst": false, 00:20:37.917 "ddgst": false 00:20:37.917 }, 00:20:37.917 "method": "bdev_nvme_attach_controller" 00:20:37.917 }' 00:20:37.917 [2024-04-23 16:19:36.509526] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:20:37.917 [2024-04-23 16:19:36.509686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100697 ] 00:20:37.917 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.917 [2024-04-23 16:19:36.645585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.917 [2024-04-23 16:19:36.745361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.175 Running I/O for 1 seconds... 
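gen_nvmf_target_json above assembles one bdev_nvme_attach_controller entry per subsystem and feeds the result to bdevperf through /dev/fd/62. Written out as a standalone file, the configuration and invocation would look roughly like the sketch below; nvme0.json is a hypothetical file name, and the queue depth, I/O size, workload and runtime mirror the -q 64 -o 65536 -w verify -t 1 flags visible in the trace.

  cat > nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # run the same 1-second verify workload against the attached namespace
  build/examples/bdevperf --json nvme0.json -q 64 -o 65536 -w verify -t 1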
00:20:39.111 00:20:39.111 Latency(us) 00:20:39.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.111 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:39.111 Verification LBA range: start 0x0 length 0x400 00:20:39.111 Nvme0n1 : 1.01 3307.99 206.75 0.00 0.00 19135.38 1353.84 32836.99 00:20:39.111 =================================================================================================================== 00:20:39.111 Total : 3307.99 206.75 0.00 0.00 19135.38 1353.84 32836.99 00:20:39.684 16:19:38 -- target/host_management.sh@101 -- # stoptarget 00:20:39.684 16:19:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:39.684 16:19:38 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.684 16:19:38 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.684 16:19:38 -- target/host_management.sh@40 -- # nvmftestfini 00:20:39.684 16:19:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.684 16:19:38 -- nvmf/common.sh@116 -- # sync 00:20:39.684 16:19:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.684 16:19:38 -- nvmf/common.sh@119 -- # set +e 00:20:39.684 16:19:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.684 16:19:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.684 rmmod nvme_tcp 00:20:39.684 rmmod nvme_fabrics 00:20:39.684 rmmod nvme_keyring 00:20:39.684 16:19:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.684 16:19:38 -- nvmf/common.sh@123 -- # set -e 00:20:39.684 16:19:38 -- nvmf/common.sh@124 -- # return 0 00:20:39.684 16:19:38 -- nvmf/common.sh@477 -- # '[' -n 3100043 ']' 00:20:39.684 16:19:38 -- nvmf/common.sh@478 -- # killprocess 3100043 00:20:39.684 16:19:38 -- common/autotest_common.sh@926 -- # '[' -z 3100043 ']' 00:20:39.684 16:19:38 -- common/autotest_common.sh@930 -- # kill -0 3100043 00:20:39.684 16:19:38 -- common/autotest_common.sh@931 -- # uname 00:20:39.684 16:19:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.684 16:19:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3100043 00:20:39.684 16:19:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:39.684 16:19:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:39.684 16:19:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3100043' 00:20:39.684 killing process with pid 3100043 00:20:39.684 16:19:38 -- common/autotest_common.sh@945 -- # kill 3100043 00:20:39.684 16:19:38 -- common/autotest_common.sh@950 -- # wait 3100043 00:20:40.252 [2024-04-23 16:19:38.933258] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:40.252 16:19:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.252 16:19:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.252 16:19:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.252 16:19:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.252 16:19:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.252 16:19:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.252 16:19:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.252 16:19:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.162 16:19:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:42.162 
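With the verify run complete, stoptarget and nvmftestfini unwind the fixture: the bdevperf state file and generated configs are removed, the kernel initiator modules are unloaded (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages), the nvmf_tgt process is killed, and the namespace addresses are flushed. Condensed from the trace above, the teardown amounts to roughly the following; $nvmfpid stands in for the target pid (3100043 in this run) and the interface name is taken from the log.

  rm -f ./local-job0-0-verify.state   # bdevperf job state written during the run
  sync
  modprobe -v -r nvme-tcp             # the trace shows nvme_fabrics/nvme_keyring dropped here too
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                     # nvmf_tgt started at the top of the test
  ip -4 addr flush cvl_0_1            # clean up the initiator-side address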
00:20:42.162 real 0m7.441s 00:20:42.162 user 0m22.786s 00:20:42.162 sys 0m1.277s 00:20:42.162 16:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.162 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:20:42.162 ************************************ 00:20:42.162 END TEST nvmf_host_management 00:20:42.162 ************************************ 00:20:42.162 16:19:41 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:42.162 00:20:42.162 real 0m13.002s 00:20:42.162 user 0m24.307s 00:20:42.162 sys 0m5.240s 00:20:42.162 16:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.162 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:20:42.162 ************************************ 00:20:42.162 END TEST nvmf_host_management 00:20:42.162 ************************************ 00:20:42.421 16:19:41 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:42.421 16:19:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:42.421 16:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:42.421 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:20:42.421 ************************************ 00:20:42.421 START TEST nvmf_lvol 00:20:42.421 ************************************ 00:20:42.421 16:19:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:42.421 * Looking for test storage... 00:20:42.421 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.421 16:19:41 -- nvmf/common.sh@7 -- # uname -s 00:20:42.421 16:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.421 16:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.421 16:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.421 16:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.421 16:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.421 16:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.421 16:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.421 16:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.421 16:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.421 16:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.421 16:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:42.421 16:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:42.421 16:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.421 16:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.421 16:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:42.421 16:19:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:42.421 16:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.421 16:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.421 16:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.421 16:19:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.421 16:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.421 16:19:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.421 16:19:41 -- paths/export.sh@5 -- # export PATH 00:20:42.421 16:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.421 16:19:41 -- nvmf/common.sh@46 -- # : 0 00:20:42.421 16:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.421 16:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.421 16:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.421 16:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.421 16:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.421 16:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.421 16:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.421 16:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:20:42.421 16:19:41 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:42.421 16:19:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:42.421 16:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:20:42.421 16:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:42.421 16:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.421 16:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.421 16:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.421 16:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.421 16:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.421 16:19:41 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:20:42.421 16:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:42.421 16:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.421 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:20:47.702 16:19:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:47.702 16:19:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:47.702 16:19:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:47.702 16:19:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:47.702 16:19:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:47.702 16:19:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:47.702 16:19:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:47.702 16:19:46 -- nvmf/common.sh@294 -- # net_devs=() 00:20:47.702 16:19:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:47.702 16:19:46 -- nvmf/common.sh@295 -- # e810=() 00:20:47.702 16:19:46 -- nvmf/common.sh@295 -- # local -ga e810 00:20:47.702 16:19:46 -- nvmf/common.sh@296 -- # x722=() 00:20:47.702 16:19:46 -- nvmf/common.sh@296 -- # local -ga x722 00:20:47.702 16:19:46 -- nvmf/common.sh@297 -- # mlx=() 00:20:47.702 16:19:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:47.702 16:19:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.702 16:19:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.702 16:19:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.702 16:19:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.703 16:19:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:47.703 16:19:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:47.703 16:19:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:47.703 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:47.703 16:19:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:47.703 16:19:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:47.703 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:47.703 16:19:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:47.703 16:19:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.703 16:19:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.703 16:19:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:47.703 Found net devices under 0000:27:00.0: cvl_0_0 00:20:47.703 16:19:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.703 16:19:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:47.703 16:19:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.703 16:19:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.703 16:19:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:47.703 Found net devices under 0000:27:00.1: cvl_0_1 00:20:47.703 16:19:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.703 16:19:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:47.703 16:19:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:47.703 16:19:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:47.703 16:19:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.703 16:19:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.703 16:19:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.703 16:19:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:47.703 16:19:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.703 16:19:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.703 16:19:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:47.703 16:19:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.703 16:19:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.703 16:19:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:47.703 16:19:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:47.703 16:19:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.703 16:19:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.703 16:19:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.703 16:19:46 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.703 16:19:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:47.703 16:19:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.961 16:19:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.961 16:19:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.961 16:19:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:47.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:20:47.961 00:20:47.961 --- 10.0.0.2 ping statistics --- 00:20:47.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.961 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:20:47.961 16:19:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.474 ms 00:20:47.962 00:20:47.962 --- 10.0.0.1 ping statistics --- 00:20:47.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.962 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:20:47.962 16:19:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.962 16:19:46 -- nvmf/common.sh@410 -- # return 0 00:20:47.962 16:19:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:47.962 16:19:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.962 16:19:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:47.962 16:19:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:47.962 16:19:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.962 16:19:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:47.962 16:19:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:47.962 16:19:46 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:47.962 16:19:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.962 16:19:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:47.962 16:19:46 -- common/autotest_common.sh@10 -- # set +x 00:20:47.962 16:19:46 -- nvmf/common.sh@469 -- # nvmfpid=3105201 00:20:47.962 16:19:46 -- nvmf/common.sh@470 -- # waitforlisten 3105201 00:20:47.962 16:19:46 -- common/autotest_common.sh@819 -- # '[' -z 3105201 ']' 00:20:47.962 16:19:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:47.962 16:19:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.962 16:19:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.962 16:19:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.962 16:19:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.962 16:19:46 -- common/autotest_common.sh@10 -- # set +x 00:20:47.962 [2024-04-23 16:19:46.819784] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:20:47.962 [2024-04-23 16:19:46.819892] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.222 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.222 [2024-04-23 16:19:46.940281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.222 [2024-04-23 16:19:47.038463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:48.222 [2024-04-23 16:19:47.038649] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.222 [2024-04-23 16:19:47.038663] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.223 [2024-04-23 16:19:47.038673] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.223 [2024-04-23 16:19:47.038745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.223 [2024-04-23 16:19:47.038839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.223 [2024-04-23 16:19:47.038844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.796 16:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.796 16:19:47 -- common/autotest_common.sh@852 -- # return 0 00:20:48.796 16:19:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:48.796 16:19:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:48.796 16:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:48.796 16:19:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.796 16:19:47 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:48.796 [2024-04-23 16:19:47.710620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.058 16:19:47 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:49.058 16:19:47 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:49.058 16:19:47 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:49.316 16:19:48 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:49.316 16:19:48 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:49.574 16:19:48 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:49.574 16:19:48 -- target/nvmf_lvol.sh@29 -- # lvs=690e4095-edd1-4d8a-9ee9-d71efb7c7ac5 00:20:49.574 16:19:48 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 690e4095-edd1-4d8a-9ee9-d71efb7c7ac5 lvol 20 00:20:49.832 16:19:48 -- target/nvmf_lvol.sh@32 -- # lvol=b2b135f7-64f0-48e0-bb0f-f41361f55764 00:20:49.832 16:19:48 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:49.832 16:19:48 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2b135f7-64f0-48e0-bb0f-f41361f55764 00:20:50.092 16:19:48 -- 
target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.093 [2024-04-23 16:19:48.918432] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.093 16:19:48 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:50.353 16:19:49 -- target/nvmf_lvol.sh@42 -- # perf_pid=3105596 00:20:50.353 16:19:49 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:50.353 16:19:49 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:50.353 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.400 16:19:50 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b2b135f7-64f0-48e0-bb0f-f41361f55764 MY_SNAPSHOT 00:20:51.400 16:19:50 -- target/nvmf_lvol.sh@47 -- # snapshot=a671cdbd-053f-491a-a25d-b7abc1914619 00:20:51.400 16:19:50 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b2b135f7-64f0-48e0-bb0f-f41361f55764 30 00:20:51.720 16:19:50 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a671cdbd-053f-491a-a25d-b7abc1914619 MY_CLONE 00:20:51.720 16:19:50 -- target/nvmf_lvol.sh@49 -- # clone=dc2bd55e-c39b-4ffd-b1a3-5bfcf35409b8 00:20:51.720 16:19:50 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dc2bd55e-c39b-4ffd-b1a3-5bfcf35409b8 00:20:52.292 16:19:50 -- target/nvmf_lvol.sh@53 -- # wait 3105596 00:21:02.285 Initializing NVMe Controllers 00:21:02.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:02.285 Controller IO queue size 128, less than required. 00:21:02.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:02.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:02.285 Initialization complete. Launching workers. 
00:21:02.285 ======================================================== 00:21:02.285 Latency(us) 00:21:02.285 Device Information : IOPS MiB/s Average min max 00:21:02.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13768.59 53.78 9300.92 550.75 80987.96 00:21:02.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13691.39 53.48 9352.69 2595.67 65274.59 00:21:02.285 ======================================================== 00:21:02.285 Total : 27459.99 107.27 9326.74 550.75 80987.96 00:21:02.285 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2b135f7-64f0-48e0-bb0f-f41361f55764 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 690e4095-edd1-4d8a-9ee9-d71efb7c7ac5 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:02.285 16:19:59 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:02.285 16:19:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:02.285 16:19:59 -- nvmf/common.sh@116 -- # sync 00:21:02.285 16:19:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:02.285 16:19:59 -- nvmf/common.sh@119 -- # set +e 00:21:02.285 16:19:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:02.285 16:19:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:02.285 rmmod nvme_tcp 00:21:02.285 rmmod nvme_fabrics 00:21:02.285 rmmod nvme_keyring 00:21:02.285 16:19:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:02.285 16:20:00 -- nvmf/common.sh@123 -- # set -e 00:21:02.285 16:20:00 -- nvmf/common.sh@124 -- # return 0 00:21:02.285 16:20:00 -- nvmf/common.sh@477 -- # '[' -n 3105201 ']' 00:21:02.285 16:20:00 -- nvmf/common.sh@478 -- # killprocess 3105201 00:21:02.285 16:20:00 -- common/autotest_common.sh@926 -- # '[' -z 3105201 ']' 00:21:02.285 16:20:00 -- common/autotest_common.sh@930 -- # kill -0 3105201 00:21:02.285 16:20:00 -- common/autotest_common.sh@931 -- # uname 00:21:02.285 16:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.285 16:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3105201 00:21:02.285 16:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:02.285 16:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:02.285 16:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3105201' 00:21:02.285 killing process with pid 3105201 00:21:02.285 16:20:00 -- common/autotest_common.sh@945 -- # kill 3105201 00:21:02.285 16:20:00 -- common/autotest_common.sh@950 -- # wait 3105201 00:21:02.285 16:20:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:02.285 16:20:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:02.285 16:20:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:02.285 16:20:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.285 16:20:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:02.285 16:20:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.285 16:20:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.285 16:20:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.194 
16:20:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:04.194 00:21:04.194 real 0m21.593s 00:21:04.194 user 1m2.739s 00:21:04.194 sys 0m6.343s 00:21:04.194 16:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.194 16:20:02 -- common/autotest_common.sh@10 -- # set +x 00:21:04.194 ************************************ 00:21:04.194 END TEST nvmf_lvol 00:21:04.194 ************************************ 00:21:04.194 16:20:02 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:04.194 16:20:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:04.194 16:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:04.194 16:20:02 -- common/autotest_common.sh@10 -- # set +x 00:21:04.194 ************************************ 00:21:04.194 START TEST nvmf_lvs_grow 00:21:04.194 ************************************ 00:21:04.194 16:20:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:04.194 * Looking for test storage... 00:21:04.194 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:04.194 16:20:02 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.194 16:20:02 -- nvmf/common.sh@7 -- # uname -s 00:21:04.194 16:20:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.194 16:20:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.194 16:20:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.194 16:20:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.194 16:20:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.194 16:20:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.194 16:20:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.194 16:20:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.194 16:20:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.194 16:20:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.194 16:20:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:04.194 16:20:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:04.194 16:20:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.194 16:20:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.194 16:20:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:04.194 16:20:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:04.194 16:20:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.194 16:20:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.194 16:20:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.194 16:20:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.194 16:20:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.194 16:20:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.194 16:20:02 -- paths/export.sh@5 -- # export PATH 00:21:04.194 16:20:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.194 16:20:02 -- nvmf/common.sh@46 -- # : 0 00:21:04.194 16:20:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:04.194 16:20:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:04.194 16:20:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:04.194 16:20:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.194 16:20:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.194 16:20:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:04.194 16:20:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:04.194 16:20:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:04.194 16:20:02 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:04.194 16:20:02 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.194 16:20:02 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:21:04.194 16:20:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:04.194 16:20:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.194 16:20:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:04.194 16:20:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:04.194 16:20:02 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:21:04.194 16:20:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.194 16:20:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.194 16:20:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.194 16:20:02 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:04.194 16:20:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:04.194 16:20:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:04.194 16:20:02 -- common/autotest_common.sh@10 -- # set +x 00:21:09.473 16:20:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:09.473 16:20:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:09.473 16:20:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:09.473 16:20:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:09.473 16:20:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:09.473 16:20:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:09.473 16:20:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:09.473 16:20:07 -- nvmf/common.sh@294 -- # net_devs=() 00:21:09.473 16:20:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:09.473 16:20:07 -- nvmf/common.sh@295 -- # e810=() 00:21:09.473 16:20:07 -- nvmf/common.sh@295 -- # local -ga e810 00:21:09.473 16:20:07 -- nvmf/common.sh@296 -- # x722=() 00:21:09.473 16:20:07 -- nvmf/common.sh@296 -- # local -ga x722 00:21:09.473 16:20:07 -- nvmf/common.sh@297 -- # mlx=() 00:21:09.473 16:20:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:09.473 16:20:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.473 16:20:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.473 16:20:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.474 16:20:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:09.474 16:20:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.474 16:20:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:09.474 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:09.474 16:20:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.474 
16:20:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:09.474 16:20:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:09.474 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:09.474 16:20:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.474 16:20:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.474 16:20:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.474 16:20:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:09.474 Found net devices under 0000:27:00.0: cvl_0_0 00:21:09.474 16:20:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.474 16:20:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:09.474 16:20:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.474 16:20:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.474 16:20:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:09.474 Found net devices under 0000:27:00.1: cvl_0_1 00:21:09.474 16:20:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.474 16:20:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:09.474 16:20:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:09.474 16:20:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:09.474 16:20:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.474 16:20:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.474 16:20:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.474 16:20:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:09.474 16:20:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.474 16:20:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.474 16:20:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:09.474 16:20:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.474 16:20:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.474 16:20:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:09.474 16:20:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:09.474 16:20:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.474 16:20:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.474 16:20:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.474 16:20:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.474 16:20:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:09.474 16:20:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:21:09.474 16:20:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.474 16:20:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.474 16:20:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:09.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:21:09.474 00:21:09.474 --- 10.0.0.2 ping statistics --- 00:21:09.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.474 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:21:09.474 16:20:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:09.474 00:21:09.474 --- 10.0.0.1 ping statistics --- 00:21:09.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.474 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:09.474 16:20:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.474 16:20:08 -- nvmf/common.sh@410 -- # return 0 00:21:09.474 16:20:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.474 16:20:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.474 16:20:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.474 16:20:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.474 16:20:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.474 16:20:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.474 16:20:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.474 16:20:08 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:21:09.474 16:20:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:09.474 16:20:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:09.474 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 16:20:08 -- nvmf/common.sh@469 -- # nvmfpid=3111818 00:21:09.474 16:20:08 -- nvmf/common.sh@470 -- # waitforlisten 3111818 00:21:09.474 16:20:08 -- common/autotest_common.sh@819 -- # '[' -z 3111818 ']' 00:21:09.474 16:20:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.474 16:20:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:09.474 16:20:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.474 16:20:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:09.474 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:09.474 16:20:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:09.474 [2024-04-23 16:20:08.124024] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:21:09.474 [2024-04-23 16:20:08.124137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.474 [2024-04-23 16:20:08.244953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.474 [2024-04-23 16:20:08.336538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:09.474 [2024-04-23 16:20:08.336714] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.474 [2024-04-23 16:20:08.336727] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.474 [2024-04-23 16:20:08.336737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.474 [2024-04-23 16:20:08.336762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.042 16:20:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:10.043 16:20:08 -- common/autotest_common.sh@852 -- # return 0 00:21:10.043 16:20:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.043 16:20:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:10.043 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:10.043 16:20:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.043 16:20:08 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.043 [2024-04-23 16:20:08.966596] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:21:10.300 16:20:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:10.300 16:20:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.300 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:10.300 ************************************ 00:21:10.300 START TEST lvs_grow_clean 00:21:10.300 ************************************ 00:21:10.300 16:20:08 -- common/autotest_common.sh@1104 -- # lvs_grow 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:10.300 16:20:08 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:10.300 16:20:09 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:10.300 16:20:09 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:10.560 16:20:09 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e lvol 150 00:21:10.819 16:20:09 -- target/nvmf_lvs_grow.sh@33 -- # lvol=895530e9-5c91-4cf2-8973-cbb734bb25ca 00:21:10.819 16:20:09 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:10.819 16:20:09 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:10.819 [2024-04-23 16:20:09.695410] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:10.819 [2024-04-23 16:20:09.695479] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:10.819 true 00:21:10.820 16:20:09 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:10.820 16:20:09 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:11.078 16:20:09 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:11.078 16:20:09 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:11.079 16:20:10 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 895530e9-5c91-4cf2-8973-cbb734bb25ca 00:21:11.339 16:20:10 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:11.598 [2024-04-23 16:20:10.271868] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.598 16:20:10 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:11.598 16:20:10 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3112177 00:21:11.598 16:20:10 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.598 16:20:10 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3112177 /var/tmp/bdevperf.sock 00:21:11.598 16:20:10 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:11.598 16:20:10 -- common/autotest_common.sh@819 -- # '[' -z 3112177 ']' 00:21:11.598 16:20:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.598 16:20:10 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:11.598 16:20:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.598 16:20:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:11.598 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:21:11.598 [2024-04-23 16:20:10.488026] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:11.598 [2024-04-23 16:20:10.488138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112177 ] 00:21:11.856 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.856 [2024-04-23 16:20:10.597981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.856 [2024-04-23 16:20:10.686516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.423 16:20:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.423 16:20:11 -- common/autotest_common.sh@852 -- # return 0 00:21:12.423 16:20:11 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:12.683 Nvme0n1 00:21:12.683 16:20:11 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:12.941 [ 00:21:12.941 { 00:21:12.941 "name": "Nvme0n1", 00:21:12.941 "aliases": [ 00:21:12.941 "895530e9-5c91-4cf2-8973-cbb734bb25ca" 00:21:12.941 ], 00:21:12.941 "product_name": "NVMe disk", 00:21:12.941 "block_size": 4096, 00:21:12.941 "num_blocks": 38912, 00:21:12.941 "uuid": "895530e9-5c91-4cf2-8973-cbb734bb25ca", 00:21:12.941 "assigned_rate_limits": { 00:21:12.941 "rw_ios_per_sec": 0, 00:21:12.941 "rw_mbytes_per_sec": 0, 00:21:12.941 "r_mbytes_per_sec": 0, 00:21:12.941 "w_mbytes_per_sec": 0 00:21:12.941 }, 00:21:12.941 "claimed": false, 00:21:12.941 "zoned": false, 00:21:12.941 "supported_io_types": { 00:21:12.941 "read": true, 00:21:12.941 "write": true, 00:21:12.941 "unmap": true, 00:21:12.941 "write_zeroes": true, 00:21:12.941 "flush": true, 00:21:12.941 "reset": true, 00:21:12.941 "compare": true, 00:21:12.941 "compare_and_write": true, 00:21:12.941 "abort": true, 00:21:12.941 "nvme_admin": true, 00:21:12.941 "nvme_io": true 00:21:12.941 }, 00:21:12.941 "driver_specific": { 00:21:12.941 "nvme": [ 00:21:12.941 { 00:21:12.941 "trid": { 00:21:12.941 "trtype": "TCP", 00:21:12.941 "adrfam": "IPv4", 00:21:12.941 "traddr": "10.0.0.2", 00:21:12.941 "trsvcid": "4420", 00:21:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:12.941 }, 00:21:12.941 "ctrlr_data": { 00:21:12.941 "cntlid": 1, 00:21:12.941 "vendor_id": "0x8086", 00:21:12.941 "model_number": "SPDK bdev Controller", 00:21:12.941 "serial_number": "SPDK0", 00:21:12.941 "firmware_revision": "24.01.1", 00:21:12.941 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:12.941 "oacs": { 00:21:12.941 "security": 0, 00:21:12.941 "format": 0, 00:21:12.941 "firmware": 0, 00:21:12.941 "ns_manage": 0 00:21:12.941 }, 00:21:12.941 "multi_ctrlr": true, 00:21:12.941 "ana_reporting": false 00:21:12.941 }, 00:21:12.941 "vs": { 00:21:12.941 "nvme_version": "1.3" 
00:21:12.941 }, 00:21:12.941 "ns_data": { 00:21:12.941 "id": 1, 00:21:12.942 "can_share": true 00:21:12.942 } 00:21:12.942 } 00:21:12.942 ], 00:21:12.942 "mp_policy": "active_passive" 00:21:12.942 } 00:21:12.942 } 00:21:12.942 ] 00:21:12.942 16:20:11 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3112478 00:21:12.942 16:20:11 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:12.942 16:20:11 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.942 Running I/O for 10 seconds... 00:21:13.875 Latency(us) 00:21:13.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:13.875 Nvme0n1 : 1.00 23835.00 93.11 0.00 0.00 0.00 0.00 0.00 00:21:13.875 =================================================================================================================== 00:21:13.875 Total : 23835.00 93.11 0.00 0.00 0.00 0.00 0.00 00:21:13.875 00:21:14.816 16:20:13 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:15.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.075 Nvme0n1 : 2.00 24017.00 93.82 0.00 0.00 0.00 0.00 0.00 00:21:15.075 =================================================================================================================== 00:21:15.075 Total : 24017.00 93.82 0.00 0.00 0.00 0.00 0.00 00:21:15.075 00:21:15.075 true 00:21:15.075 16:20:13 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:15.075 16:20:13 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:15.075 16:20:13 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:15.075 16:20:13 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:15.075 16:20:13 -- target/nvmf_lvs_grow.sh@65 -- # wait 3112478 00:21:16.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:16.010 Nvme0n1 : 3.00 24115.67 94.20 0.00 0.00 0.00 0.00 0.00 00:21:16.010 =================================================================================================================== 00:21:16.010 Total : 24115.67 94.20 0.00 0.00 0.00 0.00 0.00 00:21:16.010 00:21:16.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:16.949 Nvme0n1 : 4.00 24102.50 94.15 0.00 0.00 0.00 0.00 0.00 00:21:16.949 =================================================================================================================== 00:21:16.949 Total : 24102.50 94.15 0.00 0.00 0.00 0.00 0.00 00:21:16.949 00:21:17.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:17.885 Nvme0n1 : 5.00 24149.60 94.33 0.00 0.00 0.00 0.00 0.00 00:21:17.885 =================================================================================================================== 00:21:17.885 Total : 24149.60 94.33 0.00 0.00 0.00 0.00 0.00 00:21:17.885 00:21:19.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:19.263 Nvme0n1 : 6.00 24175.17 94.43 0.00 0.00 0.00 0.00 0.00 00:21:19.263 =================================================================================================================== 00:21:19.263 Total : 24175.17 94.43 0.00 0.00 0.00 0.00 0.00 
00:21:19.263 00:21:20.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:20.200 Nvme0n1 : 7.00 24104.43 94.16 0.00 0.00 0.00 0.00 0.00 00:21:20.200 =================================================================================================================== 00:21:20.200 Total : 24104.43 94.16 0.00 0.00 0.00 0.00 0.00 00:21:20.200 00:21:21.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:21.136 Nvme0n1 : 8.00 24115.38 94.20 0.00 0.00 0.00 0.00 0.00 00:21:21.136 =================================================================================================================== 00:21:21.136 Total : 24115.38 94.20 0.00 0.00 0.00 0.00 0.00 00:21:21.136 00:21:22.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:22.073 Nvme0n1 : 9.00 24088.33 94.10 0.00 0.00 0.00 0.00 0.00 00:21:22.073 =================================================================================================================== 00:21:22.073 Total : 24088.33 94.10 0.00 0.00 0.00 0.00 0.00 00:21:22.073 00:21:23.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.009 Nvme0n1 : 10.00 24113.10 94.19 0.00 0.00 0.00 0.00 0.00 00:21:23.009 =================================================================================================================== 00:21:23.009 Total : 24113.10 94.19 0.00 0.00 0.00 0.00 0.00 00:21:23.009 00:21:23.009 00:21:23.009 Latency(us) 00:21:23.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.009 Nvme0n1 : 10.00 24115.82 94.20 0.00 0.00 5304.17 3207.81 18901.96 00:21:23.009 =================================================================================================================== 00:21:23.009 Total : 24115.82 94.20 0.00 0.00 5304.17 3207.81 18901.96 00:21:23.009 0 00:21:23.009 16:20:21 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3112177 00:21:23.009 16:20:21 -- common/autotest_common.sh@926 -- # '[' -z 3112177 ']' 00:21:23.009 16:20:21 -- common/autotest_common.sh@930 -- # kill -0 3112177 00:21:23.009 16:20:21 -- common/autotest_common.sh@931 -- # uname 00:21:23.009 16:20:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:23.009 16:20:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3112177 00:21:23.009 16:20:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:23.009 16:20:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:23.009 16:20:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3112177' 00:21:23.009 killing process with pid 3112177 00:21:23.009 16:20:21 -- common/autotest_common.sh@945 -- # kill 3112177 00:21:23.009 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.009 00:21:23.009 Latency(us) 00:21:23.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.009 =================================================================================================================== 00:21:23.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.009 16:20:21 -- common/autotest_common.sh@950 -- # wait 3112177 00:21:23.268 16:20:22 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:23.526 16:20:22 -- target/nvmf_lvs_grow.sh@69 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:23.526 16:20:22 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:23.786 16:20:22 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:23.786 16:20:22 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:21:23.786 16:20:22 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:23.786 [2024-04-23 16:20:22.591207] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:23.786 16:20:22 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:23.786 16:20:22 -- common/autotest_common.sh@640 -- # local es=0 00:21:23.786 16:20:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:23.786 16:20:22 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:23.786 16:20:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:23.786 16:20:22 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:23.786 16:20:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:23.786 16:20:22 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:23.786 16:20:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:23.786 16:20:22 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:23.786 16:20:22 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:21:23.786 16:20:22 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:24.044 request: 00:21:24.044 { 00:21:24.044 "uuid": "9d825d37-9ffa-4cf2-853b-caf302b5c67e", 00:21:24.044 "method": "bdev_lvol_get_lvstores", 00:21:24.044 "req_id": 1 00:21:24.044 } 00:21:24.044 Got JSON-RPC error response 00:21:24.044 response: 00:21:24.044 { 00:21:24.044 "code": -19, 00:21:24.044 "message": "No such device" 00:21:24.044 } 00:21:24.044 16:20:22 -- common/autotest_common.sh@643 -- # es=1 00:21:24.044 16:20:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:24.044 16:20:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:24.044 16:20:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:24.044 16:20:22 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:24.044 aio_bdev 00:21:24.044 16:20:22 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 895530e9-5c91-4cf2-8973-cbb734bb25ca 00:21:24.044 16:20:22 -- common/autotest_common.sh@887 -- # local bdev_name=895530e9-5c91-4cf2-8973-cbb734bb25ca 00:21:24.044 16:20:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:24.044 16:20:22 -- common/autotest_common.sh@889 -- # local i 00:21:24.044 16:20:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:24.044 16:20:22 -- common/autotest_common.sh@890 
-- # bdev_timeout=2000 00:21:24.044 16:20:22 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:24.301 16:20:23 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 895530e9-5c91-4cf2-8973-cbb734bb25ca -t 2000 00:21:24.301 [ 00:21:24.301 { 00:21:24.301 "name": "895530e9-5c91-4cf2-8973-cbb734bb25ca", 00:21:24.301 "aliases": [ 00:21:24.301 "lvs/lvol" 00:21:24.301 ], 00:21:24.301 "product_name": "Logical Volume", 00:21:24.301 "block_size": 4096, 00:21:24.301 "num_blocks": 38912, 00:21:24.301 "uuid": "895530e9-5c91-4cf2-8973-cbb734bb25ca", 00:21:24.301 "assigned_rate_limits": { 00:21:24.301 "rw_ios_per_sec": 0, 00:21:24.301 "rw_mbytes_per_sec": 0, 00:21:24.301 "r_mbytes_per_sec": 0, 00:21:24.301 "w_mbytes_per_sec": 0 00:21:24.301 }, 00:21:24.301 "claimed": false, 00:21:24.301 "zoned": false, 00:21:24.301 "supported_io_types": { 00:21:24.301 "read": true, 00:21:24.301 "write": true, 00:21:24.301 "unmap": true, 00:21:24.301 "write_zeroes": true, 00:21:24.301 "flush": false, 00:21:24.301 "reset": true, 00:21:24.301 "compare": false, 00:21:24.301 "compare_and_write": false, 00:21:24.301 "abort": false, 00:21:24.301 "nvme_admin": false, 00:21:24.301 "nvme_io": false 00:21:24.301 }, 00:21:24.301 "driver_specific": { 00:21:24.301 "lvol": { 00:21:24.301 "lvol_store_uuid": "9d825d37-9ffa-4cf2-853b-caf302b5c67e", 00:21:24.301 "base_bdev": "aio_bdev", 00:21:24.301 "thin_provision": false, 00:21:24.301 "snapshot": false, 00:21:24.301 "clone": false, 00:21:24.301 "esnap_clone": false 00:21:24.301 } 00:21:24.301 } 00:21:24.301 } 00:21:24.301 ] 00:21:24.301 16:20:23 -- common/autotest_common.sh@895 -- # return 0 00:21:24.301 16:20:23 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:24.301 16:20:23 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:24.562 16:20:23 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:24.562 16:20:23 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:24.562 16:20:23 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:24.562 16:20:23 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:24.562 16:20:23 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 895530e9-5c91-4cf2-8973-cbb734bb25ca 00:21:24.822 16:20:23 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d825d37-9ffa-4cf2-853b-caf302b5c67e 00:21:24.822 16:20:23 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:25.081 00:21:25.081 real 0m14.870s 00:21:25.081 user 0m14.530s 00:21:25.081 sys 0m1.172s 00:21:25.081 16:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.081 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:21:25.081 ************************************ 00:21:25.081 END TEST lvs_grow_clean 00:21:25.081 ************************************ 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow 
dirty 00:21:25.081 16:20:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:25.081 16:20:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:25.081 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:21:25.081 ************************************ 00:21:25.081 START TEST lvs_grow_dirty 00:21:25.081 ************************************ 00:21:25.081 16:20:23 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:25.081 16:20:23 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:25.340 16:20:24 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:25.340 16:20:24 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:25.340 16:20:24 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:25.340 16:20:24 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:25.340 16:20:24 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b lvol 150 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:25.598 16:20:24 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:25.857 [2024-04-23 16:20:24.591489] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:25.857 [2024-04-23 16:20:24.591561] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:25.857 true 00:21:25.857 16:20:24 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:25.857 16:20:24 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:25.857 16:20:24 -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:21:25.857 16:20:24 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:26.116 16:20:24 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:26.116 16:20:25 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:26.375 16:20:25 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:26.375 16:20:25 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3115215 00:21:26.375 16:20:25 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.375 16:20:25 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:26.375 16:20:25 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3115215 /var/tmp/bdevperf.sock 00:21:26.375 16:20:25 -- common/autotest_common.sh@819 -- # '[' -z 3115215 ']' 00:21:26.375 16:20:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.375 16:20:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:26.375 16:20:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.375 16:20:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:26.375 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:21:26.634 [2024-04-23 16:20:25.325032] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
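Both the clean and the dirty variants drive the same grow sequence against an lvol store built on a resizable AIO file. Condensed, with options and sizes exactly as traced and long paths shortened to ./:

  truncate -s 200M ./test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  ./scripts/rpc.py bdev_lvol_create -u $lvs lvol 150        # the 150M volume exported as the subsystem namespace
  truncate -s 400M ./test/nvmf/target/aio_bdev              # double the backing file
  ./scripts/rpc.py bdev_aio_rescan aio_bdev                 # 51200 -> 102400 blocks; total_data_clusters still 49
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u $lvs           # issued mid-run; total_data_clusters goes 49 -> 99
  ./scripts/rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'

The cluster counts in the trace follow from the 4 MiB cluster size: the 200M file yields 49 data clusters and the 400M file 99 (the remainder holds lvstore metadata), and the 150M lvol rounds up to 38 clusters, which is why free_clusters reads 61 after the grow.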
00:21:26.634 [2024-04-23 16:20:25.325113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115215 ] 00:21:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.634 [2024-04-23 16:20:25.413320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.634 [2024-04-23 16:20:25.508361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.200 16:20:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:27.200 16:20:26 -- common/autotest_common.sh@852 -- # return 0 00:21:27.200 16:20:26 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:27.459 Nvme0n1 00:21:27.459 16:20:26 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:27.719 [ 00:21:27.719 { 00:21:27.719 "name": "Nvme0n1", 00:21:27.720 "aliases": [ 00:21:27.720 "6c820418-bcd7-4fdc-8dd8-d208f876a4dd" 00:21:27.720 ], 00:21:27.720 "product_name": "NVMe disk", 00:21:27.720 "block_size": 4096, 00:21:27.720 "num_blocks": 38912, 00:21:27.720 "uuid": "6c820418-bcd7-4fdc-8dd8-d208f876a4dd", 00:21:27.720 "assigned_rate_limits": { 00:21:27.720 "rw_ios_per_sec": 0, 00:21:27.720 "rw_mbytes_per_sec": 0, 00:21:27.720 "r_mbytes_per_sec": 0, 00:21:27.720 "w_mbytes_per_sec": 0 00:21:27.720 }, 00:21:27.720 "claimed": false, 00:21:27.720 "zoned": false, 00:21:27.720 "supported_io_types": { 00:21:27.720 "read": true, 00:21:27.720 "write": true, 00:21:27.720 "unmap": true, 00:21:27.720 "write_zeroes": true, 00:21:27.720 "flush": true, 00:21:27.720 "reset": true, 00:21:27.720 "compare": true, 00:21:27.720 "compare_and_write": true, 00:21:27.720 "abort": true, 00:21:27.720 "nvme_admin": true, 00:21:27.720 "nvme_io": true 00:21:27.720 }, 00:21:27.720 "driver_specific": { 00:21:27.720 "nvme": [ 00:21:27.720 { 00:21:27.720 "trid": { 00:21:27.720 "trtype": "TCP", 00:21:27.720 "adrfam": "IPv4", 00:21:27.720 "traddr": "10.0.0.2", 00:21:27.720 "trsvcid": "4420", 00:21:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:27.720 }, 00:21:27.720 "ctrlr_data": { 00:21:27.720 "cntlid": 1, 00:21:27.720 "vendor_id": "0x8086", 00:21:27.720 "model_number": "SPDK bdev Controller", 00:21:27.720 "serial_number": "SPDK0", 00:21:27.720 "firmware_revision": "24.01.1", 00:21:27.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.720 "oacs": { 00:21:27.720 "security": 0, 00:21:27.720 "format": 0, 00:21:27.720 "firmware": 0, 00:21:27.720 "ns_manage": 0 00:21:27.720 }, 00:21:27.720 "multi_ctrlr": true, 00:21:27.720 "ana_reporting": false 00:21:27.720 }, 00:21:27.720 "vs": { 00:21:27.720 "nvme_version": "1.3" 00:21:27.720 }, 00:21:27.720 "ns_data": { 00:21:27.720 "id": 1, 00:21:27.720 "can_share": true 00:21:27.720 } 00:21:27.720 } 00:21:27.720 ], 00:21:27.720 "mp_policy": "active_passive" 00:21:27.720 } 00:21:27.720 } 00:21:27.720 ] 00:21:27.720 16:20:26 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3115514 00:21:27.720 16:20:26 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:27.720 16:20:26 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.720 Running I/O for 10 
seconds... 00:21:28.660 Latency(us) 00:21:28.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:28.660 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:21:28.660 =================================================================================================================== 00:21:28.660 Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:21:28.660 00:21:29.604 16:20:28 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:29.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.864 Nvme0n1 : 2.00 23653.50 92.40 0.00 0.00 0.00 0.00 0.00 00:21:29.864 =================================================================================================================== 00:21:29.864 Total : 23653.50 92.40 0.00 0.00 0.00 0.00 0.00 00:21:29.864 00:21:29.864 true 00:21:29.864 16:20:28 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:29.864 16:20:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:30.123 16:20:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:30.123 16:20:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:30.123 16:20:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 3115514 00:21:30.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:30.691 Nvme0n1 : 3.00 23768.67 92.85 0.00 0.00 0.00 0.00 0.00 00:21:30.691 =================================================================================================================== 00:21:30.691 Total : 23768.67 92.85 0.00 0.00 0.00 0.00 0.00 00:21:30.691 00:21:31.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:31.630 Nvme0n1 : 4.00 23842.75 93.14 0.00 0.00 0.00 0.00 0.00 00:21:31.630 =================================================================================================================== 00:21:31.630 Total : 23842.75 93.14 0.00 0.00 0.00 0.00 0.00 00:21:31.630 00:21:33.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:33.007 Nvme0n1 : 5.00 23925.00 93.46 0.00 0.00 0.00 0.00 0.00 00:21:33.007 =================================================================================================================== 00:21:33.007 Total : 23925.00 93.46 0.00 0.00 0.00 0.00 0.00 00:21:33.007 00:21:33.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:33.950 Nvme0n1 : 6.00 23969.67 93.63 0.00 0.00 0.00 0.00 0.00 00:21:33.950 =================================================================================================================== 00:21:33.950 Total : 23969.67 93.63 0.00 0.00 0.00 0.00 0.00 00:21:33.950 00:21:34.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:34.914 Nvme0n1 : 7.00 24028.71 93.86 0.00 0.00 0.00 0.00 0.00 00:21:34.914 =================================================================================================================== 00:21:34.914 Total : 24028.71 93.86 0.00 0.00 0.00 0.00 0.00 00:21:34.914 00:21:35.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:35.857 Nvme0n1 : 8.00 24041.25 93.91 0.00 0.00 0.00 0.00 0.00 00:21:35.857 
=================================================================================================================== 00:21:35.857 Total : 24041.25 93.91 0.00 0.00 0.00 0.00 0.00 00:21:35.857 00:21:36.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:36.823 Nvme0n1 : 9.00 23979.67 93.67 0.00 0.00 0.00 0.00 0.00 00:21:36.823 =================================================================================================================== 00:21:36.823 Total : 23979.67 93.67 0.00 0.00 0.00 0.00 0.00 00:21:36.823 00:21:37.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:37.765 Nvme0n1 : 10.00 24013.70 93.80 0.00 0.00 0.00 0.00 0.00 00:21:37.765 =================================================================================================================== 00:21:37.765 Total : 24013.70 93.80 0.00 0.00 0.00 0.00 0.00 00:21:37.765 00:21:37.765 00:21:37.765 Latency(us) 00:21:37.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:37.765 Nvme0n1 : 10.00 24012.17 93.80 0.00 0.00 5328.09 3276.80 18626.02 00:21:37.765 =================================================================================================================== 00:21:37.765 Total : 24012.17 93.80 0.00 0.00 5328.09 3276.80 18626.02 00:21:37.765 0 00:21:37.765 16:20:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3115215 00:21:37.765 16:20:36 -- common/autotest_common.sh@926 -- # '[' -z 3115215 ']' 00:21:37.765 16:20:36 -- common/autotest_common.sh@930 -- # kill -0 3115215 00:21:37.765 16:20:36 -- common/autotest_common.sh@931 -- # uname 00:21:37.765 16:20:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.765 16:20:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3115215 00:21:37.765 16:20:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.765 16:20:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:37.765 16:20:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3115215' 00:21:37.765 killing process with pid 3115215 00:21:37.765 16:20:36 -- common/autotest_common.sh@945 -- # kill 3115215 00:21:37.765 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.765 00:21:37.765 Latency(us) 00:21:37.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.765 =================================================================================================================== 00:21:37.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.765 16:20:36 -- common/autotest_common.sh@950 -- # wait 3115215 00:21:38.108 16:20:36 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3111818 00:21:38.367 16:20:37 -- target/nvmf_lvs_grow.sh@74 -- # wait 3111818 00:21:38.628 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3111818 Killed "${NVMF_APP[@]}" "$@" 00:21:38.628 16:20:37 -- target/nvmf_lvs_grow.sh@74 -- # true 00:21:38.628 16:20:37 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:21:38.628 16:20:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:38.628 16:20:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:38.628 16:20:37 -- common/autotest_common.sh@10 -- # set +x 00:21:38.628 16:20:37 -- nvmf/common.sh@469 -- # nvmfpid=3117621 00:21:38.628 16:20:37 -- nvmf/common.sh@470 -- # waitforlisten 3117621 00:21:38.628 16:20:37 -- common/autotest_common.sh@819 -- # '[' -z 3117621 ']' 00:21:38.628 16:20:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.628 16:20:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:38.628 16:20:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.628 16:20:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:38.628 16:20:37 -- common/autotest_common.sh@10 -- # set +x 00:21:38.628 16:20:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:38.628 [2024-04-23 16:20:37.399236] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:38.628 [2024-04-23 16:20:37.399343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.628 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.628 [2024-04-23 16:20:37.525111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.889 [2024-04-23 16:20:37.620342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:38.889 [2024-04-23 16:20:37.620513] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.889 [2024-04-23 16:20:37.620526] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.889 [2024-04-23 16:20:37.620535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
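The dirty leg checks that the grown store survives an unclean shutdown of the target. What the kill and restart around this point boil down to (a sketch; long paths shortened and the ip netns exec prefix dropped):

  kill -9 $nvmfpid                                           # hard-kill nvmf_tgt with the lvstore still open
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &               # fresh target process
  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach the same file
  ./scripts/rpc.py bdev_wait_for_examine                     # lvs/lvol reappears once examine finishes
  ./scripts/rpc.py bdev_lvol_get_lvstores -u $lvs            # still 99 data clusters, 61 free

The 'Performing recovery on blobstore' and 'Recover: blob 0x0 / 0x1' notices that follow the bdev_aio_create call are that replay, consistent with the store metadata plus the single lvol being read back from disk, so the grown size and the 61 free clusters are preserved across the restart.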
00:21:38.889 [2024-04-23 16:20:37.620563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.460 16:20:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:39.460 16:20:38 -- common/autotest_common.sh@852 -- # return 0 00:21:39.460 16:20:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:39.460 16:20:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:39.460 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:21:39.460 16:20:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.460 16:20:38 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:39.460 [2024-04-23 16:20:38.244984] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:39.460 [2024-04-23 16:20:38.245114] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:39.460 [2024-04-23 16:20:38.245146] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:39.460 16:20:38 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:21:39.460 16:20:38 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:39.460 16:20:38 -- common/autotest_common.sh@887 -- # local bdev_name=6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:39.460 16:20:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:39.460 16:20:38 -- common/autotest_common.sh@889 -- # local i 00:21:39.460 16:20:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:39.460 16:20:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:39.460 16:20:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:39.719 16:20:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c820418-bcd7-4fdc-8dd8-d208f876a4dd -t 2000 00:21:39.719 [ 00:21:39.719 { 00:21:39.719 "name": "6c820418-bcd7-4fdc-8dd8-d208f876a4dd", 00:21:39.719 "aliases": [ 00:21:39.719 "lvs/lvol" 00:21:39.719 ], 00:21:39.719 "product_name": "Logical Volume", 00:21:39.719 "block_size": 4096, 00:21:39.719 "num_blocks": 38912, 00:21:39.719 "uuid": "6c820418-bcd7-4fdc-8dd8-d208f876a4dd", 00:21:39.719 "assigned_rate_limits": { 00:21:39.719 "rw_ios_per_sec": 0, 00:21:39.719 "rw_mbytes_per_sec": 0, 00:21:39.719 "r_mbytes_per_sec": 0, 00:21:39.719 "w_mbytes_per_sec": 0 00:21:39.719 }, 00:21:39.719 "claimed": false, 00:21:39.719 "zoned": false, 00:21:39.719 "supported_io_types": { 00:21:39.719 "read": true, 00:21:39.719 "write": true, 00:21:39.719 "unmap": true, 00:21:39.719 "write_zeroes": true, 00:21:39.719 "flush": false, 00:21:39.719 "reset": true, 00:21:39.719 "compare": false, 00:21:39.719 "compare_and_write": false, 00:21:39.719 "abort": false, 00:21:39.719 "nvme_admin": false, 00:21:39.719 "nvme_io": false 00:21:39.719 }, 00:21:39.719 "driver_specific": { 00:21:39.719 "lvol": { 00:21:39.719 "lvol_store_uuid": "8bc723b3-d527-4d5d-818b-a4d5d072ee3b", 00:21:39.719 "base_bdev": "aio_bdev", 00:21:39.719 "thin_provision": false, 00:21:39.719 "snapshot": false, 00:21:39.719 "clone": false, 00:21:39.719 "esnap_clone": false 00:21:39.719 } 00:21:39.719 } 00:21:39.719 } 00:21:39.719 ] 00:21:39.719 16:20:38 -- common/autotest_common.sh@895 -- # return 0 00:21:39.719 16:20:38 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:39.719 16:20:38 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:21:39.977 16:20:38 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:21:39.977 16:20:38 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:39.977 16:20:38 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:21:39.977 16:20:38 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:21:39.977 16:20:38 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:40.235 [2024-04-23 16:20:38.918960] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:40.235 16:20:38 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:40.235 16:20:38 -- common/autotest_common.sh@640 -- # local es=0 00:21:40.235 16:20:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:40.235 16:20:38 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:40.235 16:20:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.235 16:20:38 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:40.236 16:20:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.236 16:20:38 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:40.236 16:20:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:40.236 16:20:38 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:40.236 16:20:38 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:21:40.236 16:20:38 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:40.236 request: 00:21:40.236 { 00:21:40.236 "uuid": "8bc723b3-d527-4d5d-818b-a4d5d072ee3b", 00:21:40.236 "method": "bdev_lvol_get_lvstores", 00:21:40.236 "req_id": 1 00:21:40.236 } 00:21:40.236 Got JSON-RPC error response 00:21:40.236 response: 00:21:40.236 { 00:21:40.236 "code": -19, 00:21:40.236 "message": "No such device" 00:21:40.236 } 00:21:40.236 16:20:39 -- common/autotest_common.sh@643 -- # es=1 00:21:40.236 16:20:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:40.236 16:20:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:40.236 16:20:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:40.236 16:20:39 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:40.495 aio_bdev 00:21:40.495 16:20:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:40.495 16:20:39 -- common/autotest_common.sh@887 -- # local 
bdev_name=6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:40.495 16:20:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:21:40.495 16:20:39 -- common/autotest_common.sh@889 -- # local i 00:21:40.495 16:20:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:21:40.495 16:20:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:21:40.495 16:20:39 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:40.495 16:20:39 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c820418-bcd7-4fdc-8dd8-d208f876a4dd -t 2000 00:21:40.757 [ 00:21:40.757 { 00:21:40.757 "name": "6c820418-bcd7-4fdc-8dd8-d208f876a4dd", 00:21:40.757 "aliases": [ 00:21:40.757 "lvs/lvol" 00:21:40.757 ], 00:21:40.757 "product_name": "Logical Volume", 00:21:40.757 "block_size": 4096, 00:21:40.757 "num_blocks": 38912, 00:21:40.757 "uuid": "6c820418-bcd7-4fdc-8dd8-d208f876a4dd", 00:21:40.757 "assigned_rate_limits": { 00:21:40.757 "rw_ios_per_sec": 0, 00:21:40.757 "rw_mbytes_per_sec": 0, 00:21:40.757 "r_mbytes_per_sec": 0, 00:21:40.757 "w_mbytes_per_sec": 0 00:21:40.757 }, 00:21:40.757 "claimed": false, 00:21:40.757 "zoned": false, 00:21:40.757 "supported_io_types": { 00:21:40.757 "read": true, 00:21:40.757 "write": true, 00:21:40.757 "unmap": true, 00:21:40.757 "write_zeroes": true, 00:21:40.757 "flush": false, 00:21:40.757 "reset": true, 00:21:40.757 "compare": false, 00:21:40.757 "compare_and_write": false, 00:21:40.757 "abort": false, 00:21:40.757 "nvme_admin": false, 00:21:40.757 "nvme_io": false 00:21:40.757 }, 00:21:40.757 "driver_specific": { 00:21:40.757 "lvol": { 00:21:40.757 "lvol_store_uuid": "8bc723b3-d527-4d5d-818b-a4d5d072ee3b", 00:21:40.757 "base_bdev": "aio_bdev", 00:21:40.757 "thin_provision": false, 00:21:40.757 "snapshot": false, 00:21:40.757 "clone": false, 00:21:40.757 "esnap_clone": false 00:21:40.757 } 00:21:40.757 } 00:21:40.757 } 00:21:40.757 ] 00:21:40.757 16:20:39 -- common/autotest_common.sh@895 -- # return 0 00:21:40.757 16:20:39 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:40.757 16:20:39 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:40.757 16:20:39 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:40.757 16:20:39 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:40.757 16:20:39 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:41.018 16:20:39 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:41.018 16:20:39 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c820418-bcd7-4fdc-8dd8-d208f876a4dd 00:21:41.018 16:20:39 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8bc723b3-d527-4d5d-818b-a4d5d072ee3b 00:21:41.282 16:20:40 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:41.542 16:20:40 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:41.542 00:21:41.542 real 0m16.385s 00:21:41.542 user 0m42.564s 00:21:41.542 sys 0m3.082s 00:21:41.542 16:20:40 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.542 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:21:41.542 ************************************ 00:21:41.542 END TEST lvs_grow_dirty 00:21:41.542 ************************************ 00:21:41.542 16:20:40 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:41.542 16:20:40 -- common/autotest_common.sh@796 -- # type=--id 00:21:41.542 16:20:40 -- common/autotest_common.sh@797 -- # id=0 00:21:41.542 16:20:40 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:41.542 16:20:40 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:41.542 16:20:40 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:41.542 16:20:40 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:41.542 16:20:40 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:41.542 16:20:40 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:41.542 nvmf_trace.0 00:21:41.542 16:20:40 -- common/autotest_common.sh@811 -- # return 0 00:21:41.542 16:20:40 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:41.542 16:20:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:41.542 16:20:40 -- nvmf/common.sh@116 -- # sync 00:21:41.542 16:20:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:41.542 16:20:40 -- nvmf/common.sh@119 -- # set +e 00:21:41.542 16:20:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:41.542 16:20:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:41.542 rmmod nvme_tcp 00:21:41.542 rmmod nvme_fabrics 00:21:41.542 rmmod nvme_keyring 00:21:41.542 16:20:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:41.542 16:20:40 -- nvmf/common.sh@123 -- # set -e 00:21:41.542 16:20:40 -- nvmf/common.sh@124 -- # return 0 00:21:41.542 16:20:40 -- nvmf/common.sh@477 -- # '[' -n 3117621 ']' 00:21:41.542 16:20:40 -- nvmf/common.sh@478 -- # killprocess 3117621 00:21:41.542 16:20:40 -- common/autotest_common.sh@926 -- # '[' -z 3117621 ']' 00:21:41.542 16:20:40 -- common/autotest_common.sh@930 -- # kill -0 3117621 00:21:41.542 16:20:40 -- common/autotest_common.sh@931 -- # uname 00:21:41.542 16:20:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:41.542 16:20:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3117621 00:21:41.800 16:20:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:41.800 16:20:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:41.801 16:20:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3117621' 00:21:41.801 killing process with pid 3117621 00:21:41.801 16:20:40 -- common/autotest_common.sh@945 -- # kill 3117621 00:21:41.801 16:20:40 -- common/autotest_common.sh@950 -- # wait 3117621 00:21:42.059 16:20:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:42.059 16:20:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:42.059 16:20:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:42.059 16:20:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.059 16:20:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:42.059 16:20:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.059 16:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.059 16:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.603 16:20:42 -- nvmf/common.sh@278 -- # ip 
-4 addr flush cvl_0_1 00:21:44.603 00:21:44.603 real 0m40.249s 00:21:44.603 user 1m2.222s 00:21:44.603 sys 0m8.541s 00:21:44.603 16:20:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.603 16:20:42 -- common/autotest_common.sh@10 -- # set +x 00:21:44.603 ************************************ 00:21:44.603 END TEST nvmf_lvs_grow 00:21:44.603 ************************************ 00:21:44.603 16:20:43 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:44.603 16:20:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:44.603 16:20:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:44.603 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:21:44.603 ************************************ 00:21:44.603 START TEST nvmf_bdev_io_wait 00:21:44.603 ************************************ 00:21:44.603 16:20:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:44.603 * Looking for test storage... 00:21:44.603 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:44.603 16:20:43 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.603 16:20:43 -- nvmf/common.sh@7 -- # uname -s 00:21:44.603 16:20:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.603 16:20:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.603 16:20:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.603 16:20:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.603 16:20:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.603 16:20:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.603 16:20:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.603 16:20:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.603 16:20:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.603 16:20:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.603 16:20:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:44.603 16:20:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:44.603 16:20:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.603 16:20:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.603 16:20:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:44.603 16:20:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:44.603 16:20:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.603 16:20:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.603 16:20:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.603 16:20:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.603 
16:20:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.603 16:20:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.603 16:20:43 -- paths/export.sh@5 -- # export PATH 00:21:44.603 16:20:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.603 16:20:43 -- nvmf/common.sh@46 -- # : 0 00:21:44.603 16:20:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:44.603 16:20:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:44.603 16:20:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:44.603 16:20:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.603 16:20:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.603 16:20:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:44.603 16:20:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:44.603 16:20:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:44.603 16:20:43 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.603 16:20:43 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.603 16:20:43 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:44.603 16:20:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:44.603 16:20:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.603 16:20:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:44.603 16:20:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:44.603 16:20:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:44.603 16:20:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.603 16:20:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.603 16:20:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.603 16:20:43 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:44.603 16:20:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:44.603 16:20:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:44.603 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:21:49.881 16:20:47 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:49.881 16:20:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:49.881 16:20:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:49.881 16:20:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:49.881 16:20:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:49.881 16:20:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:49.881 16:20:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:49.881 16:20:47 -- nvmf/common.sh@294 -- # net_devs=() 00:21:49.881 16:20:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:49.881 16:20:47 -- nvmf/common.sh@295 -- # e810=() 00:21:49.881 16:20:47 -- nvmf/common.sh@295 -- # local -ga e810 00:21:49.881 16:20:47 -- nvmf/common.sh@296 -- # x722=() 00:21:49.881 16:20:47 -- nvmf/common.sh@296 -- # local -ga x722 00:21:49.881 16:20:47 -- nvmf/common.sh@297 -- # mlx=() 00:21:49.881 16:20:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:49.881 16:20:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.881 16:20:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:49.881 16:20:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.881 16:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:49.881 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:49.881 16:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.881 16:20:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:49.881 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:49.881 16:20:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.881 
16:20:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.881 16:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.881 16:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.881 16:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:49.881 Found net devices under 0000:27:00.0: cvl_0_0 00:21:49.881 16:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.881 16:20:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.881 16:20:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.881 16:20:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.881 16:20:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:49.881 Found net devices under 0000:27:00.1: cvl_0_1 00:21:49.881 16:20:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.881 16:20:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:49.881 16:20:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:49.881 16:20:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:49.881 16:20:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.881 16:20:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.881 16:20:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.881 16:20:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:49.881 16:20:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.881 16:20:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.881 16:20:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:49.881 16:20:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.881 16:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.881 16:20:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:49.881 16:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:49.881 16:20:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.881 16:20:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.881 16:20:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.881 16:20:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.881 16:20:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:49.881 16:20:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.881 16:20:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.881 16:20:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.881 16:20:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:49.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:21:49.881 00:21:49.881 --- 10.0.0.2 ping statistics --- 00:21:49.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.881 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:21:49.881 16:20:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.494 ms 00:21:49.881 00:21:49.881 --- 10.0.0.1 ping statistics --- 00:21:49.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.881 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:21:49.881 16:20:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.881 16:20:48 -- nvmf/common.sh@410 -- # return 0 00:21:49.881 16:20:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:49.881 16:20:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.881 16:20:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:49.881 16:20:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:49.881 16:20:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.881 16:20:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:49.881 16:20:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:49.881 16:20:48 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:49.881 16:20:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:49.881 16:20:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:49.881 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:21:49.881 16:20:48 -- nvmf/common.sh@469 -- # nvmfpid=3122166 00:21:49.881 16:20:48 -- nvmf/common.sh@470 -- # waitforlisten 3122166 00:21:49.881 16:20:48 -- common/autotest_common.sh@819 -- # '[' -z 3122166 ']' 00:21:49.881 16:20:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.881 16:20:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.881 16:20:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.882 16:20:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.882 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:21:49.882 16:20:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:49.882 [2024-04-23 16:20:48.220744] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:49.882 [2024-04-23 16:20:48.220850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.882 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.882 [2024-04-23 16:20:48.341838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.882 [2024-04-23 16:20:48.441295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:49.882 [2024-04-23 16:20:48.441472] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
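Stripped of the xtrace noise, the network bring-up that nvmf_tcp_init performed above amounts to the following condensed recap of the logged commands; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply what this run detected and assigned:

  ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

With one NIC port moved into the namespace and the other left in the root namespace, the target and the initiator sit on opposite ends of the link while both run on a single host, which is what the two successful pings above confirm.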
00:21:49.882 [2024-04-23 16:20:48.441486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.882 [2024-04-23 16:20:48.441496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.882 [2024-04-23 16:20:48.441655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.882 [2024-04-23 16:20:48.441742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.882 [2024-04-23 16:20:48.441843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.882 [2024-04-23 16:20:48.441852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.143 16:20:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:50.143 16:20:48 -- common/autotest_common.sh@852 -- # return 0 00:21:50.143 16:20:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:50.143 16:20:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:50.143 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:21:50.143 16:20:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.143 16:20:48 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:50.143 16:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.143 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:21:50.143 16:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.143 16:20:48 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:50.143 16:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.143 16:20:48 -- common/autotest_common.sh@10 -- # set +x 00:21:50.402 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.402 16:20:49 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.402 16:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.402 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:21:50.402 [2024-04-23 16:20:49.084091] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.402 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.403 16:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.403 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:21:50.403 Malloc0 00:21:50.403 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.403 16:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.403 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:21:50.403 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.403 16:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.403 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:21:50.403 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.403 16:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.403 16:20:49 -- common/autotest_common.sh@10 -- # set +x 
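The rpc_cmd calls above are what actually build the target; rpc_cmd is effectively the test harness wrapper around SPDK's RPC client, so the same bring-up can be reproduced by hand with scripts/rpc.py against a target started with --wait-for-rpc (arguments copied from the trace, rpc.py path relative to the SPDK tree):

  scripts/rpc.py bdev_set_options -p 5 -c 1
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # TCP transport
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420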
00:21:50.403 [2024-04-23 16:20:49.164976] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.403 16:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3122366 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@30 -- # READ_PID=3122369 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # config=() 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.403 16:20:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.403 { 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme$subsystem", 00:21:50.403 "trtype": "$TEST_TRANSPORT", 00:21:50.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "$NVMF_PORT", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.403 "hdgst": ${hdgst:-false}, 00:21:50.403 "ddgst": ${ddgst:-false} 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 } 00:21:50.403 EOF 00:21:50.403 )") 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3122371 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3122375 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@35 -- # sync 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # config=() 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:50.403 16:20:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.403 { 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme$subsystem", 00:21:50.403 "trtype": "$TEST_TRANSPORT", 00:21:50.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "$NVMF_PORT", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.403 "hdgst": ${hdgst:-false}, 00:21:50.403 "ddgst": ${ddgst:-false} 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 } 00:21:50.403 EOF 00:21:50.403 )") 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # config=() 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # cat 00:21:50.403 16:20:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.403 { 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme$subsystem", 00:21:50.403 "trtype": "$TEST_TRANSPORT", 00:21:50.403 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "$NVMF_PORT", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.403 "hdgst": ${hdgst:-false}, 00:21:50.403 "ddgst": ${ddgst:-false} 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 } 00:21:50.403 EOF 00:21:50.403 )") 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # config=() 00:21:50.403 16:20:49 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.403 16:20:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.403 { 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme$subsystem", 00:21:50.403 "trtype": "$TEST_TRANSPORT", 00:21:50.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "$NVMF_PORT", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.403 "hdgst": ${hdgst:-false}, 00:21:50.403 "ddgst": ${ddgst:-false} 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 } 00:21:50.403 EOF 00:21:50.403 )") 00:21:50.403 16:20:49 -- target/bdev_io_wait.sh@37 -- # wait 3122366 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # cat 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # cat 00:21:50.403 16:20:49 -- nvmf/common.sh@542 -- # cat 00:21:50.403 16:20:49 -- nvmf/common.sh@544 -- # jq . 00:21:50.403 16:20:49 -- nvmf/common.sh@544 -- # jq . 00:21:50.403 16:20:49 -- nvmf/common.sh@544 -- # jq . 00:21:50.403 16:20:49 -- nvmf/common.sh@544 -- # jq . 
00:21:50.403 16:20:49 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.403 16:20:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme1", 00:21:50.403 "trtype": "tcp", 00:21:50.403 "traddr": "10.0.0.2", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "4420", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.403 "hdgst": false, 00:21:50.403 "ddgst": false 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 }' 00:21:50.403 16:20:49 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.403 16:20:49 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.403 16:20:49 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.403 16:20:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme1", 00:21:50.403 "trtype": "tcp", 00:21:50.403 "traddr": "10.0.0.2", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "4420", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.403 "hdgst": false, 00:21:50.403 "ddgst": false 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 }' 00:21:50.403 16:20:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme1", 00:21:50.403 "trtype": "tcp", 00:21:50.403 "traddr": "10.0.0.2", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "4420", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.403 "hdgst": false, 00:21:50.403 "ddgst": false 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 }' 00:21:50.403 16:20:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.403 "params": { 00:21:50.403 "name": "Nvme1", 00:21:50.403 "trtype": "tcp", 00:21:50.403 "traddr": "10.0.0.2", 00:21:50.403 "adrfam": "ipv4", 00:21:50.403 "trsvcid": "4420", 00:21:50.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.403 "hdgst": false, 00:21:50.403 "ddgst": false 00:21:50.403 }, 00:21:50.403 "method": "bdev_nvme_attach_controller" 00:21:50.403 }' 00:21:50.404 [2024-04-23 16:20:49.230559] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:50.404 [2024-04-23 16:20:49.230648] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:50.404 [2024-04-23 16:20:49.230740] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:50.404 [2024-04-23 16:20:49.230818] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:50.404 [2024-04-23 16:20:49.244012] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:21:50.404 [2024-04-23 16:20:49.244128] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:50.404 [2024-04-23 16:20:49.252330] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:21:50.404 [2024-04-23 16:20:49.252462] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:50.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.665 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.665 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.665 [2024-04-23 16:20:49.409861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.665 [2024-04-23 16:20:49.454541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.665 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.665 [2024-04-23 16:20:49.534716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.665 [2024-04-23 16:20:49.545901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:50.665 [2024-04-23 16:20:49.591468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:50.923 [2024-04-23 16:20:49.615408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.923 [2024-04-23 16:20:49.661745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.923 [2024-04-23 16:20:49.751907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:51.182 Running I/O for 1 seconds... 00:21:51.182 Running I/O for 1 seconds... 00:21:51.182 Running I/O for 1 seconds... 00:21:51.182 Running I/O for 1 seconds... 00:21:52.118 00:21:52.118 Latency(us) 00:21:52.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.118 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:52.119 Nvme1n1 : 1.00 13049.03 50.97 0.00 0.00 9779.89 4656.51 17315.30 00:21:52.119 =================================================================================================================== 00:21:52.119 Total : 13049.03 50.97 0.00 0.00 9779.89 4656.51 17315.30 00:21:52.119 00:21:52.119 Latency(us) 00:21:52.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.119 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:52.119 Nvme1n1 : 1.01 11980.02 46.80 0.00 0.00 10649.51 5932.73 19315.87 00:21:52.119 =================================================================================================================== 00:21:52.119 Total : 11980.02 46.80 0.00 0.00 10649.51 5932.73 19315.87 00:21:52.119 00:21:52.119 Latency(us) 00:21:52.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.119 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:52.119 Nvme1n1 : 1.00 136341.60 532.58 0.00 0.00 934.96 381.57 1103.76 00:21:52.119 =================================================================================================================== 00:21:52.119 Total : 136341.60 532.58 0.00 0.00 934.96 381.57 1103.76 00:21:52.119 00:21:52.119 Latency(us) 00:21:52.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.119 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:52.119 Nvme1n1 : 1.01 12209.88 47.69 0.00 0.00 10448.89 5622.30 19867.76 00:21:52.119 =================================================================================================================== 00:21:52.119 Total : 12209.88 47.69 0.00 0.00 10448.89 5622.30 19867.76 00:21:52.689 16:20:51 -- target/bdev_io_wait.sh@38 -- # wait 3122369 00:21:52.689 
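All four workloads run concurrently against the same subsystem: each bdevperf is launched in the background with its PID recorded (WRITE_PID=3122366, READ_PID=3122369, FLUSH_PID=3122371, UNMAP_PID=3122375 above), and the script only then waits on them in turn, which is why the four "Running I/O for 1 seconds..." lines appear together before the result tables. Condensed, the orchestration is roughly:

  bdevperf ... -w write -t 1 --json <(gen_nvmf_target_json) & WRITE_PID=$!
  bdevperf ... -w read  -t 1 --json <(gen_nvmf_target_json) & READ_PID=$!
  bdevperf ... -w flush -t 1 --json <(gen_nvmf_target_json) & FLUSH_PID=$!
  bdevperf ... -w unmap -t 1 --json <(gen_nvmf_target_json) & UNMAP_PID=$!
  wait $WRITE_PID
  wait $READ_PID
  wait $FLUSH_PID
  wait $UNMAP_PID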
16:20:51 -- target/bdev_io_wait.sh@39 -- # wait 3122371 00:21:52.689 16:20:51 -- target/bdev_io_wait.sh@40 -- # wait 3122375 00:21:52.689 16:20:51 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.689 16:20:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.689 16:20:51 -- common/autotest_common.sh@10 -- # set +x 00:21:52.689 16:20:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.689 16:20:51 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:52.689 16:20:51 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:52.689 16:20:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:52.689 16:20:51 -- nvmf/common.sh@116 -- # sync 00:21:52.689 16:20:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:52.689 16:20:51 -- nvmf/common.sh@119 -- # set +e 00:21:52.689 16:20:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:52.689 16:20:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:52.689 rmmod nvme_tcp 00:21:52.689 rmmod nvme_fabrics 00:21:52.689 rmmod nvme_keyring 00:21:52.689 16:20:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:52.689 16:20:51 -- nvmf/common.sh@123 -- # set -e 00:21:52.689 16:20:51 -- nvmf/common.sh@124 -- # return 0 00:21:52.689 16:20:51 -- nvmf/common.sh@477 -- # '[' -n 3122166 ']' 00:21:52.689 16:20:51 -- nvmf/common.sh@478 -- # killprocess 3122166 00:21:52.689 16:20:51 -- common/autotest_common.sh@926 -- # '[' -z 3122166 ']' 00:21:52.689 16:20:51 -- common/autotest_common.sh@930 -- # kill -0 3122166 00:21:52.948 16:20:51 -- common/autotest_common.sh@931 -- # uname 00:21:52.948 16:20:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:52.948 16:20:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3122166 00:21:52.948 16:20:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:52.948 16:20:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:52.948 16:20:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3122166' 00:21:52.948 killing process with pid 3122166 00:21:52.948 16:20:51 -- common/autotest_common.sh@945 -- # kill 3122166 00:21:52.948 16:20:51 -- common/autotest_common.sh@950 -- # wait 3122166 00:21:53.205 16:20:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:53.205 16:20:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:53.205 16:20:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:53.205 16:20:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.205 16:20:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:53.205 16:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.205 16:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.205 16:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.748 16:20:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:55.748 00:21:55.748 real 0m11.143s 00:21:55.748 user 0m22.709s 00:21:55.748 sys 0m5.561s 00:21:55.748 16:20:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.748 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:21:55.748 ************************************ 00:21:55.748 END TEST nvmf_bdev_io_wait 00:21:55.748 ************************************ 00:21:55.748 16:20:54 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:55.748 16:20:54 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:21:55.748 16:20:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:55.748 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:21:55.748 ************************************ 00:21:55.748 START TEST nvmf_queue_depth 00:21:55.748 ************************************ 00:21:55.748 16:20:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:55.748 * Looking for test storage... 00:21:55.748 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:55.748 16:20:54 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.748 16:20:54 -- nvmf/common.sh@7 -- # uname -s 00:21:55.748 16:20:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.748 16:20:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.748 16:20:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.748 16:20:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.748 16:20:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.748 16:20:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.748 16:20:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.748 16:20:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.748 16:20:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.748 16:20:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.748 16:20:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:55.748 16:20:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:55.748 16:20:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.748 16:20:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.748 16:20:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:55.748 16:20:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:55.748 16:20:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.748 16:20:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.748 16:20:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.749 16:20:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.749 16:20:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.749 16:20:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.749 16:20:54 -- paths/export.sh@5 -- # export PATH 00:21:55.749 16:20:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.749 16:20:54 -- nvmf/common.sh@46 -- # : 0 00:21:55.749 16:20:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:55.749 16:20:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:55.749 16:20:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:55.749 16:20:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.749 16:20:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.749 16:20:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:55.749 16:20:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:55.749 16:20:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:55.749 16:20:54 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:55.749 16:20:54 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:55.749 16:20:54 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.749 16:20:54 -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:55.749 16:20:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:55.749 16:20:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.749 16:20:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:55.749 16:20:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:55.749 16:20:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:55.749 16:20:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.749 16:20:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.749 16:20:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.749 16:20:54 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:21:55.749 16:20:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:55.749 16:20:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:55.749 16:20:54 -- common/autotest_common.sh@10 -- # set +x 00:22:01.030 16:20:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:01.030 16:20:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:01.030 16:20:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:01.030 16:20:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:01.030 16:20:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:01.030 16:20:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:01.030 16:20:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:01.030 16:20:59 -- nvmf/common.sh@294 -- # 
net_devs=() 00:22:01.030 16:20:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:01.030 16:20:59 -- nvmf/common.sh@295 -- # e810=() 00:22:01.030 16:20:59 -- nvmf/common.sh@295 -- # local -ga e810 00:22:01.030 16:20:59 -- nvmf/common.sh@296 -- # x722=() 00:22:01.030 16:20:59 -- nvmf/common.sh@296 -- # local -ga x722 00:22:01.030 16:20:59 -- nvmf/common.sh@297 -- # mlx=() 00:22:01.030 16:20:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:01.030 16:20:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.030 16:20:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:01.030 16:20:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:01.030 16:20:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:01.030 16:20:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:01.030 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:01.030 16:20:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:01.030 16:20:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:01.030 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:01.030 16:20:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:01.030 16:20:59 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:01.030 16:20:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:01.030 16:20:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.030 16:20:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:01.030 16:20:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.030 16:20:59 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:27:00.0: cvl_0_0' 00:22:01.030 Found net devices under 0000:27:00.0: cvl_0_0 00:22:01.030 16:20:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.030 16:20:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:01.030 16:20:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.030 16:20:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:01.030 16:20:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.031 16:20:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:01.031 Found net devices under 0000:27:00.1: cvl_0_1 00:22:01.031 16:20:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.031 16:20:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:01.031 16:20:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:01.031 16:20:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:01.031 16:20:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:01.031 16:20:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:01.031 16:20:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.031 16:20:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.031 16:20:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.031 16:20:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:01.031 16:20:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.031 16:20:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.031 16:20:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:01.031 16:20:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.031 16:20:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.031 16:20:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:01.031 16:20:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:01.031 16:20:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.031 16:20:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.031 16:20:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.031 16:20:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.031 16:20:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:01.031 16:20:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.031 16:20:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.031 16:20:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.031 16:20:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:01.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:22:01.031 00:22:01.031 --- 10.0.0.2 ping statistics --- 00:22:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.031 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:22:01.031 16:20:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:22:01.031 00:22:01.031 --- 10.0.0.1 ping statistics --- 00:22:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.031 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:01.031 16:20:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.031 16:20:59 -- nvmf/common.sh@410 -- # return 0 00:22:01.031 16:20:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:01.031 16:20:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.031 16:20:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:01.031 16:20:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:01.031 16:20:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.031 16:20:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:01.031 16:20:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:01.031 16:20:59 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:01.031 16:20:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:01.031 16:20:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:01.031 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:22:01.031 16:20:59 -- nvmf/common.sh@469 -- # nvmfpid=3126847 00:22:01.031 16:20:59 -- nvmf/common.sh@470 -- # waitforlisten 3126847 00:22:01.031 16:20:59 -- common/autotest_common.sh@819 -- # '[' -z 3126847 ']' 00:22:01.031 16:20:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.031 16:20:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.031 16:20:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.031 16:20:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.031 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:22:01.031 16:20:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.031 [2024-04-23 16:20:59.758272] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:01.031 [2024-04-23 16:20:59.758375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.031 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.031 [2024-04-23 16:20:59.880432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.292 [2024-04-23 16:20:59.977653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:01.292 [2024-04-23 16:20:59.977819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.292 [2024-04-23 16:20:59.977832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.292 [2024-04-23 16:20:59.977841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.292 [2024-04-23 16:20:59.977870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.553 16:21:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.553 16:21:00 -- common/autotest_common.sh@852 -- # return 0 00:22:01.553 16:21:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:01.553 16:21:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:01.553 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 16:21:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.813 16:21:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.813 16:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 [2024-04-23 16:21:00.511343] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.813 16:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.813 16:21:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.813 16:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 Malloc0 00:22:01.813 16:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.813 16:21:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.813 16:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 16:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.813 16:21:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.813 16:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 16:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.813 16:21:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.813 16:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.813 [2024-04-23 16:21:00.599889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.813 16:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.813 16:21:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=3127079 00:22:01.813 16:21:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.813 16:21:00 -- target/queue_depth.sh@33 -- # waitforlisten 3127079 /var/tmp/bdevperf.sock 00:22:01.813 16:21:00 -- common/autotest_common.sh@819 -- # '[' -z 3127079 ']' 00:22:01.813 16:21:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.813 16:21:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.813 16:21:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:01.813 16:21:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.813 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.814 16:21:00 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:01.814 [2024-04-23 16:21:00.675238] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:01.814 [2024-04-23 16:21:00.675345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127079 ] 00:22:02.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.072 [2024-04-23 16:21:00.793825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.072 [2024-04-23 16:21:00.891597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.639 16:21:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.639 16:21:01 -- common/autotest_common.sh@852 -- # return 0 00:22:02.639 16:21:01 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.639 16:21:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.639 16:21:01 -- common/autotest_common.sh@10 -- # set +x 00:22:02.900 NVMe0n1 00:22:02.900 16:21:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.900 16:21:01 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.900 Running I/O for 10 seconds... 
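The queue-depth test drives the target differently from the previous one: bdevperf is started idle with -z on its own RPC socket, the NVMe-oF controller is attached through that socket, and only then is the 10-second verify workload at queue depth 1024 kicked off via bdevperf.py. Stripped of wrappers, the sequence logged above is approximately:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The NVMe0n1 bdev that the result table below reports on is the namespace exposed by that attach call.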
00:22:12.888 00:22:12.888 Latency(us) 00:22:12.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.888 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:12.888 Verification LBA range: start 0x0 length 0x4000 00:22:12.888 NVMe0n1 : 10.05 18137.20 70.85 0.00 0.00 56304.01 9726.92 49393.45 00:22:12.888 =================================================================================================================== 00:22:12.888 Total : 18137.20 70.85 0.00 0.00 56304.01 9726.92 49393.45 00:22:12.888 0 00:22:12.888 16:21:11 -- target/queue_depth.sh@39 -- # killprocess 3127079 00:22:12.888 16:21:11 -- common/autotest_common.sh@926 -- # '[' -z 3127079 ']' 00:22:12.888 16:21:11 -- common/autotest_common.sh@930 -- # kill -0 3127079 00:22:12.888 16:21:11 -- common/autotest_common.sh@931 -- # uname 00:22:12.888 16:21:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:12.888 16:21:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3127079 00:22:12.888 16:21:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:12.888 16:21:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:12.888 16:21:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3127079' 00:22:12.888 killing process with pid 3127079 00:22:12.888 16:21:11 -- common/autotest_common.sh@945 -- # kill 3127079 00:22:12.888 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.888 00:22:12.888 Latency(us) 00:22:12.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.888 =================================================================================================================== 00:22:12.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.889 16:21:11 -- common/autotest_common.sh@950 -- # wait 3127079 00:22:13.458 16:21:12 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:13.458 16:21:12 -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:13.458 16:21:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.458 16:21:12 -- nvmf/common.sh@116 -- # sync 00:22:13.458 16:21:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.458 16:21:12 -- nvmf/common.sh@119 -- # set +e 00:22:13.458 16:21:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.458 16:21:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.458 rmmod nvme_tcp 00:22:13.458 rmmod nvme_fabrics 00:22:13.458 rmmod nvme_keyring 00:22:13.458 16:21:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.458 16:21:12 -- nvmf/common.sh@123 -- # set -e 00:22:13.458 16:21:12 -- nvmf/common.sh@124 -- # return 0 00:22:13.458 16:21:12 -- nvmf/common.sh@477 -- # '[' -n 3126847 ']' 00:22:13.458 16:21:12 -- nvmf/common.sh@478 -- # killprocess 3126847 00:22:13.458 16:21:12 -- common/autotest_common.sh@926 -- # '[' -z 3126847 ']' 00:22:13.458 16:21:12 -- common/autotest_common.sh@930 -- # kill -0 3126847 00:22:13.458 16:21:12 -- common/autotest_common.sh@931 -- # uname 00:22:13.458 16:21:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.458 16:21:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3126847 00:22:13.458 16:21:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:13.458 16:21:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:13.458 16:21:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3126847' 00:22:13.458 killing process with pid 3126847 00:22:13.458 
16:21:12 -- common/autotest_common.sh@945 -- # kill 3126847 00:22:13.458 16:21:12 -- common/autotest_common.sh@950 -- # wait 3126847 00:22:14.025 16:21:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:14.025 16:21:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:14.025 16:21:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:14.025 16:21:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.025 16:21:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:14.025 16:21:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.025 16:21:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.025 16:21:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.932 16:21:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:15.932 00:22:15.932 real 0m20.650s 00:22:15.932 user 0m25.434s 00:22:15.932 sys 0m5.322s 00:22:15.932 16:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.932 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.932 ************************************ 00:22:15.932 END TEST nvmf_queue_depth 00:22:15.932 ************************************ 00:22:16.192 16:21:14 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:16.192 16:21:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:16.192 16:21:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:16.192 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:22:16.192 ************************************ 00:22:16.192 START TEST nvmf_multipath 00:22:16.192 ************************************ 00:22:16.192 16:21:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:16.192 * Looking for test storage... 
00:22:16.192 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:16.192 16:21:14 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.192 16:21:14 -- nvmf/common.sh@7 -- # uname -s 00:22:16.192 16:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.192 16:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.192 16:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.192 16:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.192 16:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.192 16:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.192 16:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.192 16:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.192 16:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.192 16:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.192 16:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:16.192 16:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:16.192 16:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.192 16:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.192 16:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:16.192 16:21:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:16.192 16:21:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.192 16:21:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.192 16:21:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.192 16:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.192 16:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.192 16:21:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.192 16:21:14 -- paths/export.sh@5 -- # export PATH 00:22:16.192 16:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.192 16:21:14 -- nvmf/common.sh@46 -- # : 0 00:22:16.192 16:21:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:16.192 16:21:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:16.192 16:21:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:16.192 16:21:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.192 16:21:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.192 16:21:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:16.192 16:21:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:16.192 16:21:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:16.192 16:21:14 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.192 16:21:14 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.192 16:21:14 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:16.192 16:21:14 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:16.192 16:21:14 -- target/multipath.sh@43 -- # nvmftestinit 00:22:16.192 16:21:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:16.192 16:21:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.192 16:21:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:16.192 16:21:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:16.192 16:21:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:16.192 16:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.192 16:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.192 16:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.192 16:21:14 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:16.193 16:21:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:16.193 16:21:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:16.193 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.477 16:21:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:21.477 16:21:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:21.477 16:21:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:21.477 16:21:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:21.477 16:21:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:21.477 16:21:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:21.477 16:21:19 
-- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:21.477 16:21:19 -- nvmf/common.sh@294 -- # net_devs=() 00:22:21.477 16:21:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:21.477 16:21:19 -- nvmf/common.sh@295 -- # e810=() 00:22:21.477 16:21:19 -- nvmf/common.sh@295 -- # local -ga e810 00:22:21.477 16:21:19 -- nvmf/common.sh@296 -- # x722=() 00:22:21.477 16:21:19 -- nvmf/common.sh@296 -- # local -ga x722 00:22:21.477 16:21:19 -- nvmf/common.sh@297 -- # mlx=() 00:22:21.477 16:21:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:21.477 16:21:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.477 16:21:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:21.477 16:21:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.477 16:21:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:21.477 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:21.477 16:21:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.477 16:21:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:21.477 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:21.477 16:21:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.477 16:21:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.477 16:21:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.477 16:21:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:21.477 Found net devices under 0000:27:00.0: cvl_0_0 00:22:21.477 16:21:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.477 16:21:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.477 16:21:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.477 16:21:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.477 16:21:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:21.477 Found net devices under 0000:27:00.1: cvl_0_1 00:22:21.477 16:21:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.477 16:21:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:21.477 16:21:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:21.477 16:21:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:21.477 16:21:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.477 16:21:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.477 16:21:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.477 16:21:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:21.477 16:21:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.477 16:21:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.477 16:21:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:21.477 16:21:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.477 16:21:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.477 16:21:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:21.477 16:21:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:21.477 16:21:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.477 16:21:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.477 16:21:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.477 16:21:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.477 16:21:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:21.477 16:21:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.477 16:21:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.477 16:21:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.477 16:21:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:21.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:22:21.477 00:22:21.477 --- 10.0.0.2 ping statistics --- 00:22:21.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.477 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:22:21.477 16:21:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:22:21.477 00:22:21.477 --- 10.0.0.1 ping statistics --- 00:22:21.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.477 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:21.477 16:21:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.477 16:21:20 -- nvmf/common.sh@410 -- # return 0 00:22:21.477 16:21:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:21.477 16:21:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.477 16:21:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:21.477 16:21:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:21.477 16:21:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.477 16:21:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:21.477 16:21:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:21.477 16:21:20 -- target/multipath.sh@45 -- # '[' -z ']' 00:22:21.477 16:21:20 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:21.477 only one NIC for nvmf test 00:22:21.477 16:21:20 -- target/multipath.sh@47 -- # nvmftestfini 00:22:21.477 16:21:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:21.477 16:21:20 -- nvmf/common.sh@116 -- # sync 00:22:21.477 16:21:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:21.477 16:21:20 -- nvmf/common.sh@119 -- # set +e 00:22:21.477 16:21:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:21.477 16:21:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:21.477 rmmod nvme_tcp 00:22:21.477 rmmod nvme_fabrics 00:22:21.477 rmmod nvme_keyring 00:22:21.477 16:21:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:21.477 16:21:20 -- nvmf/common.sh@123 -- # set -e 00:22:21.478 16:21:20 -- nvmf/common.sh@124 -- # return 0 00:22:21.478 16:21:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:21.478 16:21:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:21.478 16:21:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:21.478 16:21:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:21.478 16:21:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.478 16:21:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:21.478 16:21:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.478 16:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.478 16:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.389 16:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:23.389 16:21:22 -- target/multipath.sh@48 -- # exit 0 00:22:23.389 16:21:22 -- target/multipath.sh@1 -- # nvmftestfini 00:22:23.389 16:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:23.389 16:21:22 -- nvmf/common.sh@116 -- # sync 00:22:23.389 16:21:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:23.389 16:21:22 -- nvmf/common.sh@119 -- # set +e 00:22:23.389 16:21:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:23.389 16:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:23.389 16:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:23.389 16:21:22 -- nvmf/common.sh@123 -- # set -e 00:22:23.389 16:21:22 -- nvmf/common.sh@124 -- # return 0 00:22:23.389 16:21:22 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:23.389 16:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:23.389 16:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:23.389 16:21:22 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:22:23.389 16:21:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.389 16:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:23.389 16:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.389 16:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.389 16:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.389 16:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:23.389 00:22:23.389 real 0m7.434s 00:22:23.389 user 0m1.536s 00:22:23.389 sys 0m3.774s 00:22:23.389 16:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.389 16:21:22 -- common/autotest_common.sh@10 -- # set +x 00:22:23.389 ************************************ 00:22:23.389 END TEST nvmf_multipath 00:22:23.389 ************************************ 00:22:23.650 16:21:22 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:23.650 16:21:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:23.650 16:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:23.650 16:21:22 -- common/autotest_common.sh@10 -- # set +x 00:22:23.650 ************************************ 00:22:23.650 START TEST nvmf_zcopy 00:22:23.650 ************************************ 00:22:23.650 16:21:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:23.650 * Looking for test storage... 00:22:23.650 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:23.650 16:21:22 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.650 16:21:22 -- nvmf/common.sh@7 -- # uname -s 00:22:23.650 16:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.650 16:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.650 16:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.650 16:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.650 16:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.650 16:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.650 16:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.650 16:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.650 16:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.650 16:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.650 16:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:23.650 16:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:23.650 16:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.650 16:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.650 16:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:23.650 16:21:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:23.650 16:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.650 16:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.650 16:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.650 16:21:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.650 16:21:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.650 16:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.650 16:21:22 -- paths/export.sh@5 -- # export PATH 00:22:23.650 16:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.650 16:21:22 -- nvmf/common.sh@46 -- # : 0 00:22:23.650 16:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:23.650 16:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:23.650 16:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:23.650 16:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.650 16:21:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.650 16:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:23.650 16:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:23.650 16:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:23.650 16:21:22 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:23.650 16:21:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:23.650 16:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.650 16:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:23.650 16:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:23.650 16:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:23.650 16:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.650 16:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.650 16:21:22 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.650 16:21:22 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:23.650 16:21:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:23.650 16:21:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:23.650 16:21:22 -- common/autotest_common.sh@10 -- # set +x 00:22:28.937 16:21:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:28.937 16:21:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:28.937 16:21:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:28.937 16:21:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:28.937 16:21:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:28.937 16:21:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:28.937 16:21:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:28.937 16:21:27 -- nvmf/common.sh@294 -- # net_devs=() 00:22:28.937 16:21:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:28.937 16:21:27 -- nvmf/common.sh@295 -- # e810=() 00:22:28.937 16:21:27 -- nvmf/common.sh@295 -- # local -ga e810 00:22:28.937 16:21:27 -- nvmf/common.sh@296 -- # x722=() 00:22:28.937 16:21:27 -- nvmf/common.sh@296 -- # local -ga x722 00:22:28.937 16:21:27 -- nvmf/common.sh@297 -- # mlx=() 00:22:28.937 16:21:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:28.937 16:21:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.937 16:21:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:28.937 16:21:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:28.937 16:21:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.937 16:21:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:28.937 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:28.937 16:21:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.937 16:21:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:28.937 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:28.937 16:21:27 
-- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:28.937 16:21:27 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:22:28.937 16:21:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.938 16:21:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.938 16:21:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.938 16:21:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.938 16:21:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:28.938 Found net devices under 0000:27:00.0: cvl_0_0 00:22:28.938 16:21:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.938 16:21:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.938 16:21:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.938 16:21:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.938 16:21:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.938 16:21:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:28.938 Found net devices under 0000:27:00.1: cvl_0_1 00:22:28.938 16:21:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.938 16:21:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:28.938 16:21:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:28.938 16:21:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:28.938 16:21:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:28.938 16:21:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:28.938 16:21:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.938 16:21:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.938 16:21:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.938 16:21:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:28.938 16:21:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.938 16:21:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.938 16:21:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:28.938 16:21:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.938 16:21:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.938 16:21:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:28.938 16:21:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:28.938 16:21:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.938 16:21:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.938 16:21:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.938 16:21:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.938 16:21:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:28.938 16:21:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.938 16:21:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.938 16:21:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
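
The nvmf_tcp_init sequence traced above (re-run here for the zcopy test, exactly as it was for multipath) reduces to a short block of iproute2/iptables commands. The cvl_0_0/cvl_0_1 names are how this node exposes its two E810 ports and would differ on other hardware; the target-side port is moved into its own network namespace so initiator and target can exchange real TCP traffic on a single host:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the default NVMe/TCP port

The two pings that follow only confirm that each side can reach the other before the target application is started.
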
00:22:28.938 16:21:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:28.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:22:28.938 00:22:28.938 --- 10.0.0.2 ping statistics --- 00:22:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.938 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:28.938 16:21:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:22:28.938 00:22:28.938 --- 10.0.0.1 ping statistics --- 00:22:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.938 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:22:28.938 16:21:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.938 16:21:27 -- nvmf/common.sh@410 -- # return 0 00:22:28.938 16:21:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:28.938 16:21:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.938 16:21:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:28.938 16:21:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:28.938 16:21:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.938 16:21:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:28.938 16:21:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:28.938 16:21:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:28.938 16:21:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:28.938 16:21:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.938 16:21:27 -- common/autotest_common.sh@10 -- # set +x 00:22:28.938 16:21:27 -- nvmf/common.sh@469 -- # nvmfpid=3137755 00:22:28.938 16:21:27 -- nvmf/common.sh@470 -- # waitforlisten 3137755 00:22:28.938 16:21:27 -- common/autotest_common.sh@819 -- # '[' -z 3137755 ']' 00:22:28.938 16:21:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.938 16:21:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.938 16:21:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.938 16:21:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.938 16:21:27 -- common/autotest_common.sh@10 -- # set +x 00:22:28.938 16:21:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.197 [2024-04-23 16:21:27.910552] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:29.197 [2024-04-23 16:21:27.910671] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.197 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.197 [2024-04-23 16:21:28.031093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.197 [2024-04-23 16:21:28.127467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.197 [2024-04-23 16:21:28.127646] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:29.197 [2024-04-23 16:21:28.127660] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.197 [2024-04-23 16:21:28.127671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.197 [2024-04-23 16:21:28.127699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.767 16:21:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.767 16:21:28 -- common/autotest_common.sh@852 -- # return 0 00:22:29.767 16:21:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:29.767 16:21:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:29.767 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:29.767 16:21:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.767 16:21:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:29.767 16:21:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:29.767 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.768 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:29.768 [2024-04-23 16:21:28.657818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.768 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.768 16:21:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:29.768 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.768 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:29.768 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.768 16:21:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.768 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.768 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:29.768 [2024-04-23 16:21:28.678055] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.768 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.768 16:21:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.768 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.768 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:29.768 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.768 16:21:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:29.768 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.768 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:30.028 malloc0 00:22:30.028 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.028 16:21:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.028 16:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.028 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:22:30.028 16:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.028 16:21:28 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:30.028 16:21:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:30.028 16:21:28 -- nvmf/common.sh@520 -- # config=() 00:22:30.028 16:21:28 -- 
nvmf/common.sh@520 -- # local subsystem config 00:22:30.028 16:21:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:30.028 16:21:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:30.028 { 00:22:30.028 "params": { 00:22:30.028 "name": "Nvme$subsystem", 00:22:30.028 "trtype": "$TEST_TRANSPORT", 00:22:30.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.028 "adrfam": "ipv4", 00:22:30.028 "trsvcid": "$NVMF_PORT", 00:22:30.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.028 "hdgst": ${hdgst:-false}, 00:22:30.028 "ddgst": ${ddgst:-false} 00:22:30.028 }, 00:22:30.028 "method": "bdev_nvme_attach_controller" 00:22:30.028 } 00:22:30.028 EOF 00:22:30.028 )") 00:22:30.028 16:21:28 -- nvmf/common.sh@542 -- # cat 00:22:30.028 16:21:28 -- nvmf/common.sh@544 -- # jq . 00:22:30.029 16:21:28 -- nvmf/common.sh@545 -- # IFS=, 00:22:30.029 16:21:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:30.029 "params": { 00:22:30.029 "name": "Nvme1", 00:22:30.029 "trtype": "tcp", 00:22:30.029 "traddr": "10.0.0.2", 00:22:30.029 "adrfam": "ipv4", 00:22:30.029 "trsvcid": "4420", 00:22:30.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.029 "hdgst": false, 00:22:30.029 "ddgst": false 00:22:30.029 }, 00:22:30.029 "method": "bdev_nvme_attach_controller" 00:22:30.029 }' 00:22:30.029 [2024-04-23 16:21:28.807787] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:22:30.029 [2024-04-23 16:21:28.807902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137865 ] 00:22:30.029 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.029 [2024-04-23 16:21:28.925779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.288 [2024-04-23 16:21:29.025092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.547 Running I/O for 10 seconds... 
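
Before this run, the target was assembled with a handful of RPCs (issued above through the rpc_cmd wrapper): a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. bdevperf then reads a generated JSON config whose single entry is the bdev_nvme_attach_controller call printed in the trace, and runs verify for 10 seconds at queue depth 128 with 8 KiB I/O. A rough scripts/rpc.py equivalent of the target setup, assuming the default RPC socket:

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                    # TCP transport, zero-copy on
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                           # 32 MiB bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1
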
00:22:40.653 00:22:40.653 Latency(us) 00:22:40.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.653 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:40.653 Verification LBA range: start 0x0 length 0x1000 00:22:40.653 Nvme1n1 : 10.01 13319.24 104.06 0.00 0.00 9589.05 1215.87 18350.08 00:22:40.653 =================================================================================================================== 00:22:40.653 Total : 13319.24 104.06 0.00 0.00 9589.05 1215.87 18350.08 00:22:40.912 16:21:39 -- target/zcopy.sh@39 -- # perfpid=3139943 00:22:40.912 16:21:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:22:40.912 16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:22:40.912 16:21:39 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:40.912 16:21:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:40.912 16:21:39 -- nvmf/common.sh@520 -- # config=() 00:22:40.912 16:21:39 -- nvmf/common.sh@520 -- # local subsystem config 00:22:40.912 16:21:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:40.912 16:21:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:40.912 { 00:22:40.912 "params": { 00:22:40.912 "name": "Nvme$subsystem", 00:22:40.912 "trtype": "$TEST_TRANSPORT", 00:22:40.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.912 "adrfam": "ipv4", 00:22:40.912 "trsvcid": "$NVMF_PORT", 00:22:40.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.912 "hdgst": ${hdgst:-false}, 00:22:40.912 "ddgst": ${ddgst:-false} 00:22:40.912 }, 00:22:40.912 "method": "bdev_nvme_attach_controller" 00:22:40.912 } 00:22:40.912 EOF 00:22:40.912 )") 00:22:40.912 16:21:39 -- nvmf/common.sh@542 -- # cat 00:22:40.912 [2024-04-23 16:21:39.685644] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.685698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 16:21:39 -- nvmf/common.sh@544 -- # jq . 
00:22:40.912 16:21:39 -- nvmf/common.sh@545 -- # IFS=, 00:22:40.912 16:21:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:40.912 "params": { 00:22:40.912 "name": "Nvme1", 00:22:40.912 "trtype": "tcp", 00:22:40.912 "traddr": "10.0.0.2", 00:22:40.912 "adrfam": "ipv4", 00:22:40.912 "trsvcid": "4420", 00:22:40.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.912 "hdgst": false, 00:22:40.912 "ddgst": false 00:22:40.912 }, 00:22:40.912 "method": "bdev_nvme_attach_controller" 00:22:40.912 }' 00:22:40.912 [2024-04-23 16:21:39.693561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.693594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.701532] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.701552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.709537] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.709557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.717543] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.717562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.725525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.725544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.733539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.733557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.912 [2024-04-23 16:21:39.741539] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.912 [2024-04-23 16:21:39.741563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.748123] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:22:40.913 [2024-04-23 16:21:39.748235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139943 ] 00:22:40.913 [2024-04-23 16:21:39.749528] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.749546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.757542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.757560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.765531] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.765548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.773546] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.773564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.781549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.781566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.789541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.789558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.797547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.797562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.805551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.805576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.813542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.813574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.913 [2024-04-23 16:21:39.821551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.821567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.829561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.829577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:40.913 [2024-04-23 16:21:39.837557] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:40.913 [2024-04-23 16:21:39.837575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.845561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.845579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.853555] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.853571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.857171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.171 [2024-04-23 16:21:39.861566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.861583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.869566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.869585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.877561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.877577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.885570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.885584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.171 [2024-04-23 16:21:39.893566] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.171 [2024-04-23 16:21:39.893581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.901575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.901590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.909579] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.909594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.917570] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.917585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.925589] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.925604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.933583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.933603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.941578] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.941593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.949586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.949601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.953145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.172 [2024-04-23 16:21:39.957580] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.957595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:22:41.172 [2024-04-23 16:21:39.965586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.965601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.973588] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.973603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.981586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.981601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.989597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.989612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:39.997601] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:39.997615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.005635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.005657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.013643] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.013666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.021614] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.021634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.029635] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.029652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.037634] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.037649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.045613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.045626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.053624] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.053642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.061621] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.061639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.069617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.069632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.077758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:41.172 [2024-04-23 16:21:40.077772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.085623] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.085639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.093637] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.093651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.172 [2024-04-23 16:21:40.101641] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.172 [2024-04-23 16:21:40.101655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.109637] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.109651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.117674] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.117700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.125665] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.125685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.133652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.133669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.141670] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.141692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.149658] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.149672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.157669] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.157684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.165670] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.165684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.173660] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.173674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.181674] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.181689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.189690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.189708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.197678] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.197696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.205686] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.205700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.213689] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.213703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.221688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.221702] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.229685] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.229698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.237688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.237706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.245702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.245717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.253706] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.253720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.261695] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.261709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.269704] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.269719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.277708] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.277722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.285725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.285744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.293720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.293734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.301719] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.301732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.309730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.309745] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.317724] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.317738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.325721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.325736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.335331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.335356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.341744] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.341763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 Running I/O for 5 seconds... 00:22:41.431 [2024-04-23 16:21:40.349743] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.349757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.431 [2024-04-23 16:21:40.361381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.431 [2024-04-23 16:21:40.361408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.368869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.368894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.377741] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.377765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.386992] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.387017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.395699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.395722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.404212] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.404238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.413023] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.413048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.421956] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.421982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.430790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.430814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.440233] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.440259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.449222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.449247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.458768] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.458793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.466693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.466718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.474183] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.474207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.482112] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.482137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.492895] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.492920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.501655] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.501678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.510605] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.510634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.517399] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.517422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.527564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.527588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.536105] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.536129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.545250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.545275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.554292] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.554316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.563491] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.563518] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.572656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.572685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.581490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.581514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.590199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.590223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.599225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.599250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.607887] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.607910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.689 [2024-04-23 16:21:40.616741] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.689 [2024-04-23 16:21:40.616765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.626164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.626188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.635073] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.635099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.644662] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.644686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.652972] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.653001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.662152] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.662177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.670342] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.670366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.679184] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.679208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.688501] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.688525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.697726] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.697750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.707004] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.707030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.716462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.716485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.725608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.725637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.734196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.734220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.743102] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.743126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.752362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.752387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.761849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.761873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.770123] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.770148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.778735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.778759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.787455] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.787479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.796504] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.796526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.805879] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.805903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.814826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.814849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.823846] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.823873] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.832541] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.832563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.841583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.841605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.850701] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.850726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.860352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.860376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.869329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.869352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:41.949 [2024-04-23 16:21:40.878671] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:41.949 [2024-04-23 16:21:40.878698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.887569] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.887594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.896702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.896725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.905115] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.905139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.913864] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.913886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.922892] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.922918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.931650] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.931675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.941522] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.941548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.949927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.949952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.959255] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.959280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.968057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.968080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.977170] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.977195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.986609] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.986637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:40.995806] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:40.995836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.004548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.004575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.013699] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.013723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.022700] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.022725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.031425] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.031450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.040724] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.040750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.049716] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.049742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.059051] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.059078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.067364] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.067388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.076376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.076403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.085096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.085119] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.094184] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.094211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.103508] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.103535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.112982] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.113014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.122726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.122751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.131949] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.131976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.209 [2024-04-23 16:21:41.140758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.209 [2024-04-23 16:21:41.140783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.149910] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.149942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.158730] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.158754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.167691] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.167719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.176954] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.176979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.185880] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.185905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.195091] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.195117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.204408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.204431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.213994] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.214017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.222288] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.222314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.231521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.231546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.240564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.240591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.249994] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.250019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.258293] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.258318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.267162] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.267188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.276432] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.276457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.285199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.468 [2024-04-23 16:21:41.285225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.468 [2024-04-23 16:21:41.294430] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.294455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.303058] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.303083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.312426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.312451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.321758] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.321784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.330483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.330507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.339564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.339591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.348548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.348572] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.357964] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.357988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.367120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.367144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.376006] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.376032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.385106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.385131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.469 [2024-04-23 16:21:41.393604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.469 [2024-04-23 16:21:41.393636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.402834] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.402859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.411977] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.412003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.420832] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.420854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.430192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.430217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.439029] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.439054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.448287] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.448311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.457652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.457676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.466655] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.466681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.475981] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.476004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.485067] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.485092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.494041] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.494065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.502972] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.502998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.512209] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.512232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.521424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.521448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.530588] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.530613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.539859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.539883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.548192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.548216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.557299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.557323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.566254] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.566279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.575534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.575558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.584895] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.584920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.594073] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.594097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.603675] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.603699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.612912] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.612936] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.621965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.621989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.631252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.631275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.640606] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.640633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.649735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.649759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.729 [2024-04-23 16:21:41.659054] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.729 [2024-04-23 16:21:41.659081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.667898] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.667922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.676664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.676687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.685637] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.685662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.694826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.694850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.703755] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.703782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.712499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.712522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.722187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.722212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.731065] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.731093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.740239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.740263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.749051] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.749073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.757858] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.757883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.767081] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.767104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.775388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.775411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.784311] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.784335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.793159] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.793183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.802535] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.802559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.811659] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.811682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.820678] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.820701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.830018] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.830042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.839131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.839153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.848575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.848605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.857359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.857383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.866431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.866457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.875213] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.875237] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.884346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.884371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.893702] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.893726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.988 [2024-04-23 16:21:41.902507] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.988 [2024-04-23 16:21:41.902531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:42.989 [2024-04-23 16:21:41.912313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:42.989 [2024-04-23 16:21:41.912339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.920843] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.920867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.929991] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.930014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.939268] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.939294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.948586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.948609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.957586] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.957609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.966785] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.966810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.975938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.975962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.985188] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.985211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:41.994426] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:41.994450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:42.003745] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:43.247 [2024-04-23 16:21:42.003768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:43.247 [2024-04-23 16:21:42.012689] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:43.247 [2024-04-23 16:21:42.012713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[ ... the same two errors (subsystem.c:1753: "Requested NSID 1 already in use" and nvmf_rpc.c:1513: "Unable to add namespace") recur for every subsequent add-namespace attempt from 16:21:42.012 through 16:21:44.769, console time 00:22:43.247 through 00:22:45.845 ... ]
00:22:45.845 [2024-04-23 16:21:44.769381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:22:45.845 [2024-04-23 16:21:44.769406]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.778267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.778294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.787511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.787535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.797042] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.797068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.806337] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.806362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.815709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.815732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.824947] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.824973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.834333] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.834357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.843670] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.843697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.852917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.852941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.861736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.861759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.871050] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.871075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.880354] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.880377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.889194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.889218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.898306] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.898329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.907601] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.907627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.916562] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.916586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.926469] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.926496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.934966] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.934994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.943990] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.944014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.953388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.953413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.962688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.962712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.971823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.971846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.981160] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.981184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.989888] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.989910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:44.999009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:44.999034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:45.007663] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:45.007686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:45.016973] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.105 [2024-04-23 16:21:45.016997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.105 [2024-04-23 16:21:45.025953] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.106 [2024-04-23 16:21:45.025977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.106 [2024-04-23 16:21:45.034225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.106 [2024-04-23 16:21:45.034251] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.043589] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.043615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.052774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.052798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.061120] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.061144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.070394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.070421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.079394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.079418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.088608] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.088637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.097824] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.097848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.107123] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.107151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.116189] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.116214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.125535] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.125559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.135317] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.135340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.144515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.144539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.152615] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.152644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.161812] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.161838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.171075] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.171100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.179904] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.179928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.189273] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.189300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.198126] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.198151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.206760] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.206785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.216163] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.216192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.225294] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.225319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.234720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.234745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.243127] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.243152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.252376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.252400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.261748] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.261774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.271093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.271117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.279762] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.367 [2024-04-23 16:21:45.279787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.367 [2024-04-23 16:21:45.288987] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.368 [2024-04-23 16:21:45.289011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.368 [2024-04-23 16:21:45.297924] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.368 [2024-04-23 16:21:45.297948] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.307222] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.307246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.316075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.316099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.325367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.325391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.334332] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.334356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.343718] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.343744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.351954] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.351978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 00:22:46.628 Latency(us) 00:22:46.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.628 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:22:46.628 Nvme1n1 : 5.01 18004.14 140.66 0.00 0.00 7102.32 2431.73 18626.02 00:22:46.628 =================================================================================================================== 00:22:46.628 Total : 18004.14 140.66 0.00 0.00 7102.32 2431.73 18626.02 00:22:46.628 [2024-04-23 16:21:45.358375] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.358399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.628 [2024-04-23 16:21:45.366352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.628 [2024-04-23 16:21:45.366373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.374357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.374371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.382359] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.382373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.390348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.390361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.398362] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.398378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.406364] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.406377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.414367] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.414381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.422366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.422379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.430357] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.430370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.438369] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.438383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.446379] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.446393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.454366] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.454378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.462376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.462388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.470370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.470386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.478381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.478398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.486387] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.486403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.494380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.494397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.502398] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.502415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.510389] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.510405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.518383] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.518397] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.526394] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.526408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.534391] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.534405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.542407] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.542421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.550405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.550421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.629 [2024-04-23 16:21:45.558412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.629 [2024-04-23 16:21:45.558429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.888 [2024-04-23 16:21:45.566421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.888 [2024-04-23 16:21:45.566439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.888 [2024-04-23 16:21:45.574419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.888 [2024-04-23 16:21:45.574435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.888 [2024-04-23 16:21:45.582411] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.888 [2024-04-23 16:21:45.582425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.888 [2024-04-23 16:21:45.590416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.888 [2024-04-23 16:21:45.590432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.888 [2024-04-23 16:21:45.598431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.598448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.606429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.606444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.614436] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.614452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.622435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.622449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.630431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.630446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.638439] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.638454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.646441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.646455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.654442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.654457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.662441] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.662456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.670446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.670460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.678446] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.678461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.686442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.686456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.694463] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.694481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.702459] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.702473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.710451] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.710465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 [2024-04-23 16:21:45.718464] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:46.889 [2024-04-23 16:21:45.718479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:46.889 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3139943) - No such process 00:22:46.889 16:21:45 -- target/zcopy.sh@49 -- # wait 3139943 00:22:46.889 16:21:45 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:46.889 16:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.889 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:22:46.889 16:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.889 16:21:45 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:46.889 16:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.889 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:22:46.889 delay0 00:22:46.889 16:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.889 16:21:45 -- 
target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:46.889 16:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.889 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:22:46.889 16:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.889 16:21:45 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:46.889 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.149 [2024-04-23 16:21:45.911746] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:53.721 Initializing NVMe Controllers 00:22:53.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:53.721 Initialization complete. Launching workers. 00:22:53.721 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 794 00:22:53.721 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1068, failed to submit 46 00:22:53.721 success 882, unsuccess 186, failed 0 00:22:53.721 16:21:52 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:53.721 16:21:52 -- target/zcopy.sh@60 -- # nvmftestfini 00:22:53.721 16:21:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:53.721 16:21:52 -- nvmf/common.sh@116 -- # sync 00:22:53.721 16:21:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:53.721 16:21:52 -- nvmf/common.sh@119 -- # set +e 00:22:53.721 16:21:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:53.721 16:21:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:53.721 rmmod nvme_tcp 00:22:53.721 rmmod nvme_fabrics 00:22:53.721 rmmod nvme_keyring 00:22:53.721 16:21:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.721 16:21:52 -- nvmf/common.sh@123 -- # set -e 00:22:53.721 16:21:52 -- nvmf/common.sh@124 -- # return 0 00:22:53.721 16:21:52 -- nvmf/common.sh@477 -- # '[' -n 3137755 ']' 00:22:53.721 16:21:52 -- nvmf/common.sh@478 -- # killprocess 3137755 00:22:53.721 16:21:52 -- common/autotest_common.sh@926 -- # '[' -z 3137755 ']' 00:22:53.721 16:21:52 -- common/autotest_common.sh@930 -- # kill -0 3137755 00:22:53.721 16:21:52 -- common/autotest_common.sh@931 -- # uname 00:22:53.721 16:21:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:53.721 16:21:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3137755 00:22:53.721 16:21:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:53.721 16:21:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:53.721 16:21:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3137755' 00:22:53.721 killing process with pid 3137755 00:22:53.721 16:21:52 -- common/autotest_common.sh@945 -- # kill 3137755 00:22:53.721 16:21:52 -- common/autotest_common.sh@950 -- # wait 3137755 00:22:53.982 16:21:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.982 16:21:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.982 16:21:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.982 16:21:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.982 16:21:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.982 16:21:52 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.982 16:21:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.982 16:21:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.890 16:21:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:55.890 00:22:55.890 real 0m32.392s 00:22:55.890 user 0m45.747s 00:22:55.890 sys 0m8.960s 00:22:55.890 16:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.890 16:21:54 -- common/autotest_common.sh@10 -- # set +x 00:22:55.890 ************************************ 00:22:55.890 END TEST nvmf_zcopy 00:22:55.890 ************************************ 00:22:55.890 16:21:54 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:55.890 16:21:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:55.890 16:21:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.890 16:21:54 -- common/autotest_common.sh@10 -- # set +x 00:22:55.890 ************************************ 00:22:55.890 START TEST nvmf_nmic 00:22:55.890 ************************************ 00:22:55.890 16:21:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:56.151 * Looking for test storage... 00:22:56.151 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:22:56.151 16:21:54 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.151 16:21:54 -- nvmf/common.sh@7 -- # uname -s 00:22:56.151 16:21:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.151 16:21:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.151 16:21:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.151 16:21:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.151 16:21:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.151 16:21:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.151 16:21:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.151 16:21:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.151 16:21:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.151 16:21:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.151 16:21:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:56.151 16:21:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:56.151 16:21:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.151 16:21:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.151 16:21:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:56.151 16:21:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:56.151 16:21:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.151 16:21:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.151 16:21:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.152 16:21:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.152 16:21:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.152 16:21:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.152 16:21:54 -- paths/export.sh@5 -- # export PATH 00:22:56.152 16:21:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.152 16:21:54 -- nvmf/common.sh@46 -- # : 0 00:22:56.152 16:21:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:56.152 16:21:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:56.152 16:21:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:56.152 16:21:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.152 16:21:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.152 16:21:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:56.152 16:21:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:56.152 16:21:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:56.152 16:21:54 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:56.152 16:21:54 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:56.152 16:21:54 -- target/nmic.sh@14 -- # nvmftestinit 00:22:56.152 16:21:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:56.152 16:21:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.152 16:21:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:56.152 16:21:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:56.152 16:21:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:56.152 16:21:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:56.152 16:21:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.152 16:21:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.152 16:21:54 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:22:56.152 16:21:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:56.152 16:21:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:56.152 16:21:54 -- common/autotest_common.sh@10 -- # set +x 00:23:01.435 16:21:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.435 16:21:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:01.435 16:21:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:01.435 16:21:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:01.435 16:21:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:01.435 16:21:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:01.435 16:21:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:01.435 16:21:59 -- nvmf/common.sh@294 -- # net_devs=() 00:23:01.435 16:21:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:01.435 16:21:59 -- nvmf/common.sh@295 -- # e810=() 00:23:01.435 16:21:59 -- nvmf/common.sh@295 -- # local -ga e810 00:23:01.435 16:21:59 -- nvmf/common.sh@296 -- # x722=() 00:23:01.435 16:21:59 -- nvmf/common.sh@296 -- # local -ga x722 00:23:01.435 16:21:59 -- nvmf/common.sh@297 -- # mlx=() 00:23:01.435 16:21:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:01.435 16:21:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.435 16:21:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:01.435 16:21:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.435 16:21:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:01.435 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:01.435 16:21:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.435 16:21:59 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:01.435 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:01.435 16:21:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.435 16:21:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.435 16:21:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.435 16:21:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:01.435 Found net devices under 0000:27:00.0: cvl_0_0 00:23:01.435 16:21:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.435 16:21:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.435 16:21:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.435 16:21:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.435 16:21:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:01.435 Found net devices under 0000:27:00.1: cvl_0_1 00:23:01.435 16:21:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.435 16:21:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:01.435 16:21:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:01.435 16:21:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:01.435 16:21:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.435 16:21:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.435 16:21:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.435 16:21:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:01.435 16:21:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.435 16:21:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.435 16:21:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:01.435 16:21:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.435 16:21:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.435 16:21:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:01.435 16:21:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:01.435 16:21:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.435 16:21:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.435 16:21:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.435 16:21:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.435 16:21:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:01.435 16:21:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.436 16:21:59 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:23:01.436 16:21:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.436 16:21:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:01.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:23:01.436 00:23:01.436 --- 10.0.0.2 ping statistics --- 00:23:01.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.436 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:01.436 16:21:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:23:01.436 00:23:01.436 --- 10.0.0.1 ping statistics --- 00:23:01.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.436 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:01.436 16:21:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.436 16:21:59 -- nvmf/common.sh@410 -- # return 0 00:23:01.436 16:21:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:01.436 16:21:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.436 16:21:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:01.436 16:21:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:01.436 16:21:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.436 16:21:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:01.436 16:21:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:01.436 16:21:59 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:01.436 16:21:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:01.436 16:21:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:01.436 16:21:59 -- common/autotest_common.sh@10 -- # set +x 00:23:01.436 16:21:59 -- nvmf/common.sh@469 -- # nvmfpid=3146440 00:23:01.436 16:21:59 -- nvmf/common.sh@470 -- # waitforlisten 3146440 00:23:01.436 16:21:59 -- common/autotest_common.sh@819 -- # '[' -z 3146440 ']' 00:23:01.436 16:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.436 16:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:01.436 16:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.436 16:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:01.436 16:21:59 -- common/autotest_common.sh@10 -- # set +x 00:23:01.436 16:21:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.436 [2024-04-23 16:22:00.057784] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:23:01.436 [2024-04-23 16:22:00.057891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.436 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.436 [2024-04-23 16:22:00.177439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.436 [2024-04-23 16:22:00.275679] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:01.436 [2024-04-23 16:22:00.275845] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.436 [2024-04-23 16:22:00.275857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.436 [2024-04-23 16:22:00.275865] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.436 [2024-04-23 16:22:00.275937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.436 [2024-04-23 16:22:00.275993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.436 [2024-04-23 16:22:00.276095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.436 [2024-04-23 16:22:00.276106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.010 16:22:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.010 16:22:00 -- common/autotest_common.sh@852 -- # return 0 00:23:02.010 16:22:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:02.010 16:22:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 16:22:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.010 16:22:00 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 [2024-04-23 16:22:00.814792] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 Malloc0 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.010 [2024-04-23 16:22:00.877464] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:02.010 test case1: single bdev can't be used in multiple subsystems 00:23:02.010 16:22:00 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@28 -- # nmic_status=0 00:23:02.010 16:22:00 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 [2024-04-23 16:22:00.901296] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:02.010 [2024-04-23 16:22:00.901325] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:02.010 [2024-04-23 16:22:00.901338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:02.010 request: 00:23:02.010 { 00:23:02.010 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.010 "namespace": { 00:23:02.010 "bdev_name": "Malloc0" 00:23:02.010 }, 00:23:02.010 "method": "nvmf_subsystem_add_ns", 00:23:02.010 "req_id": 1 00:23:02.010 } 00:23:02.010 Got JSON-RPC error response 00:23:02.010 response: 00:23:02.010 { 00:23:02.010 "code": -32602, 00:23:02.010 "message": "Invalid parameters" 00:23:02.010 } 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@29 -- # nmic_status=1 00:23:02.010 16:22:00 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:02.010 16:22:00 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:02.010 Adding namespace failed - expected result. 
00:23:02.010 16:22:00 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:02.010 test case2: host connect to nvmf target in multiple paths 00:23:02.010 16:22:00 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.010 16:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.010 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.010 [2024-04-23 16:22:00.909443] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.010 16:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.010 16:22:00 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:03.917 16:22:02 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:05.295 16:22:03 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:05.295 16:22:03 -- common/autotest_common.sh@1177 -- # local i=0 00:23:05.295 16:22:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:05.295 16:22:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:05.295 16:22:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:07.204 16:22:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:07.204 16:22:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:07.204 16:22:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:07.204 16:22:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:07.204 16:22:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:07.204 16:22:05 -- common/autotest_common.sh@1187 -- # return 0 00:23:07.204 16:22:05 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:07.204 [global] 00:23:07.204 thread=1 00:23:07.204 invalidate=1 00:23:07.204 rw=write 00:23:07.204 time_based=1 00:23:07.204 runtime=1 00:23:07.204 ioengine=libaio 00:23:07.204 direct=1 00:23:07.204 bs=4096 00:23:07.204 iodepth=1 00:23:07.204 norandommap=0 00:23:07.204 numjobs=1 00:23:07.204 00:23:07.204 verify_dump=1 00:23:07.204 verify_backlog=512 00:23:07.204 verify_state_save=0 00:23:07.204 do_verify=1 00:23:07.204 verify=crc32c-intel 00:23:07.204 [job0] 00:23:07.204 filename=/dev/nvme0n1 00:23:07.204 Could not set queue depth (nvme0n1) 00:23:07.464 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:07.464 fio-3.35 00:23:07.464 Starting 1 thread 00:23:08.845 00:23:08.845 job0: (groupid=0, jobs=1): err= 0: pid=3147827: Tue Apr 23 16:22:07 2024 00:23:08.845 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:23:08.845 slat (nsec): min=9696, max=45389, avg=35559.38, stdev=8979.28 00:23:08.845 clat (usec): min=41144, max=42193, avg=41912.09, stdev=196.50 00:23:08.845 lat (usec): min=41154, max=42217, avg=41947.65, stdev=201.16 00:23:08.845 clat percentiles (usec): 00:23:08.845 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:08.845 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:08.845 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:08.845 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:08.845 | 99.99th=[42206] 00:23:08.845 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:23:08.845 slat (usec): min=6, max=29486, avg=70.32, stdev=1302.60 00:23:08.845 clat (usec): min=158, max=980, avg=223.76, stdev=69.70 00:23:08.845 lat (usec): min=168, max=30370, avg=294.07, stdev=1333.65 00:23:08.845 clat percentiles (usec): 00:23:08.845 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 194], 00:23:08.845 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 00:23:08.845 | 70.00th=[ 210], 80.00th=[ 241], 90.00th=[ 314], 95.00th=[ 330], 00:23:08.845 | 99.00th=[ 437], 99.50th=[ 725], 99.90th=[ 979], 99.95th=[ 979], 00:23:08.845 | 99.99th=[ 979] 00:23:08.845 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:23:08.845 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:08.845 lat (usec) : 250=83.30%, 500=12.01%, 750=0.38%, 1000=0.38% 00:23:08.845 lat (msec) : 50=3.94% 00:23:08.845 cpu : usr=0.29%, sys=0.97%, ctx=536, majf=0, minf=1 00:23:08.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:08.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.845 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:08.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:08.845 00:23:08.845 Run status group 0 (all jobs): 00:23:08.845 READ: bw=81.3KiB/s (83.3kB/s), 81.3KiB/s-81.3KiB/s (83.3kB/s-83.3kB/s), io=84.0KiB (86.0kB), run=1033-1033msec 00:23:08.845 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:23:08.845 00:23:08.845 Disk stats (read/write): 00:23:08.845 nvme0n1: ios=42/512, merge=0/0, ticks=1682/108, in_queue=1790, util=99.00% 00:23:08.845 16:22:07 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:09.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:09.105 16:22:07 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:09.105 16:22:07 -- common/autotest_common.sh@1198 -- # local i=0 00:23:09.105 16:22:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:09.105 16:22:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:09.105 16:22:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:09.105 16:22:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:09.105 16:22:07 -- common/autotest_common.sh@1210 -- # return 0 00:23:09.105 16:22:07 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:09.105 16:22:07 -- target/nmic.sh@53 -- # nvmftestfini 00:23:09.105 16:22:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:09.105 16:22:07 -- nvmf/common.sh@116 -- # sync 00:23:09.105 16:22:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:09.105 16:22:07 -- nvmf/common.sh@119 -- # set +e 00:23:09.105 16:22:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:09.105 16:22:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:09.105 rmmod nvme_tcp 00:23:09.105 rmmod nvme_fabrics 00:23:09.105 rmmod nvme_keyring 00:23:09.105 16:22:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:09.105 16:22:07 -- nvmf/common.sh@123 -- # set -e 
00:23:09.105 16:22:07 -- nvmf/common.sh@124 -- # return 0 00:23:09.105 16:22:07 -- nvmf/common.sh@477 -- # '[' -n 3146440 ']' 00:23:09.105 16:22:07 -- nvmf/common.sh@478 -- # killprocess 3146440 00:23:09.105 16:22:07 -- common/autotest_common.sh@926 -- # '[' -z 3146440 ']' 00:23:09.105 16:22:07 -- common/autotest_common.sh@930 -- # kill -0 3146440 00:23:09.105 16:22:07 -- common/autotest_common.sh@931 -- # uname 00:23:09.105 16:22:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:09.105 16:22:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3146440 00:23:09.105 16:22:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:09.105 16:22:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:09.105 16:22:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3146440' 00:23:09.105 killing process with pid 3146440 00:23:09.105 16:22:08 -- common/autotest_common.sh@945 -- # kill 3146440 00:23:09.105 16:22:08 -- common/autotest_common.sh@950 -- # wait 3146440 00:23:09.677 16:22:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:09.677 16:22:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:09.677 16:22:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:09.677 16:22:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.677 16:22:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:09.677 16:22:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.677 16:22:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.677 16:22:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.216 16:22:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:12.216 00:23:12.216 real 0m15.808s 00:23:12.216 user 0m49.352s 00:23:12.216 sys 0m4.496s 00:23:12.216 16:22:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.216 16:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:12.216 ************************************ 00:23:12.216 END TEST nvmf_nmic 00:23:12.216 ************************************ 00:23:12.216 16:22:10 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:12.216 16:22:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:12.216 16:22:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:12.216 16:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:12.216 ************************************ 00:23:12.216 START TEST nvmf_fio_target 00:23:12.216 ************************************ 00:23:12.216 16:22:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:12.217 * Looking for test storage... 
00:23:12.217 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:12.217 16:22:10 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.217 16:22:10 -- nvmf/common.sh@7 -- # uname -s 00:23:12.217 16:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.217 16:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.217 16:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.217 16:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.217 16:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.217 16:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.217 16:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.217 16:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.217 16:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.217 16:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.217 16:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:12.217 16:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:12.217 16:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.217 16:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.217 16:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:12.217 16:22:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:12.217 16:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.217 16:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.217 16:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.217 16:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.217 16:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.217 16:22:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.217 16:22:10 -- paths/export.sh@5 -- # export PATH 00:23:12.217 16:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.217 16:22:10 -- nvmf/common.sh@46 -- # : 0 00:23:12.217 16:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:12.217 16:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:12.217 16:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:12.217 16:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.217 16:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.217 16:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:12.217 16:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:12.217 16:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:12.217 16:22:10 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:12.217 16:22:10 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.217 16:22:10 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:12.217 16:22:10 -- target/fio.sh@16 -- # nvmftestinit 00:23:12.217 16:22:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:12.217 16:22:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.217 16:22:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:12.217 16:22:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:12.217 16:22:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:12.217 16:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.217 16:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.217 16:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.217 16:22:10 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:12.217 16:22:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:12.217 16:22:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:12.217 16:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:18.798 16:22:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:18.799 16:22:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:18.799 16:22:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:18.799 16:22:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:18.799 16:22:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:18.799 16:22:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:18.799 16:22:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:18.799 16:22:16 -- nvmf/common.sh@294 -- # net_devs=() 
00:23:18.799 16:22:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:18.799 16:22:16 -- nvmf/common.sh@295 -- # e810=() 00:23:18.799 16:22:16 -- nvmf/common.sh@295 -- # local -ga e810 00:23:18.799 16:22:16 -- nvmf/common.sh@296 -- # x722=() 00:23:18.799 16:22:16 -- nvmf/common.sh@296 -- # local -ga x722 00:23:18.799 16:22:16 -- nvmf/common.sh@297 -- # mlx=() 00:23:18.799 16:22:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:18.799 16:22:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.799 16:22:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:18.799 16:22:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:18.799 16:22:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:18.799 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:18.799 16:22:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:18.799 16:22:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:18.799 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:18.799 16:22:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:18.799 16:22:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.799 16:22:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.799 16:22:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:23:18.799 Found net devices under 0000:27:00.0: cvl_0_0 00:23:18.799 16:22:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.799 16:22:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:18.799 16:22:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.799 16:22:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.799 16:22:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:18.799 Found net devices under 0000:27:00.1: cvl_0_1 00:23:18.799 16:22:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.799 16:22:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:18.799 16:22:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:18.799 16:22:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.799 16:22:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.799 16:22:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.799 16:22:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:18.799 16:22:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.799 16:22:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.799 16:22:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:18.799 16:22:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.799 16:22:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.799 16:22:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:18.799 16:22:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:18.799 16:22:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.799 16:22:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.799 16:22:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.799 16:22:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.799 16:22:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:18.799 16:22:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.799 16:22:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.799 16:22:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.799 16:22:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:18.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:23:18.799 00:23:18.799 --- 10.0.0.2 ping statistics --- 00:23:18.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.799 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:23:18.799 16:22:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:23:18.799 00:23:18.799 --- 10.0.0.1 ping statistics --- 00:23:18.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.799 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:23:18.799 16:22:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.799 16:22:16 -- nvmf/common.sh@410 -- # return 0 00:23:18.799 16:22:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:18.799 16:22:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.799 16:22:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:18.799 16:22:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.799 16:22:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:18.799 16:22:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:18.799 16:22:16 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:18.799 16:22:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:18.799 16:22:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:18.799 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:18.799 16:22:16 -- nvmf/common.sh@469 -- # nvmfpid=3152315 00:23:18.799 16:22:16 -- nvmf/common.sh@470 -- # waitforlisten 3152315 00:23:18.799 16:22:16 -- common/autotest_common.sh@819 -- # '[' -z 3152315 ']' 00:23:18.799 16:22:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.799 16:22:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:18.799 16:22:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.799 16:22:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:18.799 16:22:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:18.799 16:22:16 -- common/autotest_common.sh@10 -- # set +x 00:23:18.799 [2024-04-23 16:22:16.951403] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:18.799 [2024-04-23 16:22:16.951513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.799 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.799 [2024-04-23 16:22:17.075421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.799 [2024-04-23 16:22:17.175267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:18.799 [2024-04-23 16:22:17.175446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.799 [2024-04-23 16:22:17.175461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.799 [2024-04-23 16:22:17.175470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.799 [2024-04-23 16:22:17.175546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.799 [2024-04-23 16:22:17.175662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.799 [2024-04-23 16:22:17.175692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.799 [2024-04-23 16:22:17.175701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.799 16:22:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:18.799 16:22:17 -- common/autotest_common.sh@852 -- # return 0 00:23:18.799 16:22:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:18.799 16:22:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:18.799 16:22:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.799 16:22:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.799 16:22:17 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:19.059 [2024-04-23 16:22:17.835813] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.059 16:22:17 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:19.320 16:22:18 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:19.320 16:22:18 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:19.320 16:22:18 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:19.320 16:22:18 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:19.581 16:22:18 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:19.581 16:22:18 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:19.840 16:22:18 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:19.840 16:22:18 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:19.840 16:22:18 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:20.098 16:22:18 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:20.098 16:22:18 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:20.357 16:22:19 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:20.357 16:22:19 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:20.357 16:22:19 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:20.357 16:22:19 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:20.617 16:22:19 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:20.617 16:22:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:20.617 16:22:19 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.879 16:22:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:20.879 16:22:19 -- target/fio.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:20.879 16:22:19 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.141 [2024-04-23 16:22:19.898352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.141 16:22:19 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:21.141 16:22:20 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:21.402 16:22:20 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:23.309 16:22:21 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:23:23.309 16:22:21 -- common/autotest_common.sh@1177 -- # local i=0 00:23:23.309 16:22:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:23.309 16:22:21 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:23:23.309 16:22:21 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:23:23.309 16:22:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:25.215 16:22:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:25.215 16:22:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:25.215 16:22:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:25.215 16:22:23 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:23:25.215 16:22:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:25.215 16:22:23 -- common/autotest_common.sh@1187 -- # return 0 00:23:25.215 16:22:23 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:25.215 [global] 00:23:25.215 thread=1 00:23:25.215 invalidate=1 00:23:25.215 rw=write 00:23:25.215 time_based=1 00:23:25.215 runtime=1 00:23:25.215 ioengine=libaio 00:23:25.215 direct=1 00:23:25.215 bs=4096 00:23:25.215 iodepth=1 00:23:25.215 norandommap=0 00:23:25.215 numjobs=1 00:23:25.215 00:23:25.215 verify_dump=1 00:23:25.215 verify_backlog=512 00:23:25.215 verify_state_save=0 00:23:25.215 do_verify=1 00:23:25.215 verify=crc32c-intel 00:23:25.215 [job0] 00:23:25.215 filename=/dev/nvme0n1 00:23:25.215 [job1] 00:23:25.215 filename=/dev/nvme0n2 00:23:25.215 [job2] 00:23:25.215 filename=/dev/nvme0n3 00:23:25.215 [job3] 00:23:25.215 filename=/dev/nvme0n4 00:23:25.215 Could not set queue depth (nvme0n1) 00:23:25.215 Could not set queue depth (nvme0n2) 00:23:25.215 Could not set queue depth (nvme0n3) 00:23:25.215 Could not set queue depth (nvme0n4) 00:23:25.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:25.472 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:25.472 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:25.472 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:25.472 fio-3.35 00:23:25.472 Starting 4 threads 00:23:26.880 
00:23:26.880 job0: (groupid=0, jobs=1): err= 0: pid=3153885: Tue Apr 23 16:22:25 2024 00:23:26.880 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:26.880 slat (nsec): min=3018, max=18826, avg=3772.96, stdev=705.99 00:23:26.880 clat (usec): min=210, max=529, avg=348.98, stdev=70.78 00:23:26.880 lat (usec): min=213, max=532, avg=352.76, stdev=70.83 00:23:26.880 clat percentiles (usec): 00:23:26.880 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 281], 00:23:26.880 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 347], 60.00th=[ 375], 00:23:26.880 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 449], 95.00th=[ 469], 00:23:26.880 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 529], 99.95th=[ 529], 00:23:26.880 | 99.99th=[ 529] 00:23:26.880 write: IOPS=1947, BW=7788KiB/s (7975kB/s)(7796KiB/1001msec); 0 zone resets 00:23:26.880 slat (nsec): min=4396, max=56270, avg=7988.61, stdev=5930.30 00:23:26.880 clat (usec): min=144, max=782, avg=224.04, stdev=47.78 00:23:26.880 lat (usec): min=149, max=828, avg=232.03, stdev=50.97 00:23:26.880 clat percentiles (usec): 00:23:26.880 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 192], 00:23:26.880 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:23:26.880 | 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 289], 95.00th=[ 306], 00:23:26.880 | 99.00th=[ 388], 99.50th=[ 424], 99.90th=[ 611], 99.95th=[ 783], 00:23:26.880 | 99.99th=[ 783] 00:23:26.880 bw ( KiB/s): min= 8192, max= 8192, per=38.51%, avg=8192.00, stdev= 0.00, samples=1 00:23:26.880 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:26.880 lat (usec) : 250=48.46%, 500=50.65%, 750=0.86%, 1000=0.03% 00:23:26.880 cpu : usr=1.20%, sys=2.60%, ctx=3488, majf=0, minf=1 00:23:26.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 issued rwts: total=1536,1949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:26.881 job1: (groupid=0, jobs=1): err= 0: pid=3153898: Tue Apr 23 16:22:25 2024 00:23:26.881 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:23:26.881 slat (nsec): min=4219, max=28100, avg=6028.48, stdev=1077.34 00:23:26.881 clat (usec): min=258, max=528, avg=330.08, stdev=33.54 00:23:26.881 lat (usec): min=263, max=533, avg=336.11, stdev=33.54 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:23:26.881 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:23:26.881 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 375], 00:23:26.881 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 519], 99.95th=[ 529], 00:23:26.881 | 99.99th=[ 529] 00:23:26.881 write: IOPS=1964, BW=7856KiB/s (8045kB/s)(7864KiB/1001msec); 0 zone resets 00:23:26.881 slat (nsec): min=4787, max=57420, avg=10183.45, stdev=5964.27 00:23:26.881 clat (usec): min=167, max=1459, avg=231.13, stdev=58.99 00:23:26.881 lat (usec): min=175, max=1467, avg=241.32, stdev=62.48 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:23:26.881 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:23:26.881 | 70.00th=[ 233], 80.00th=[ 249], 90.00th=[ 297], 95.00th=[ 351], 00:23:26.881 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 701], 99.95th=[ 1467], 
00:23:26.881 | 99.99th=[ 1467] 00:23:26.881 bw ( KiB/s): min= 8192, max= 8192, per=38.51%, avg=8192.00, stdev= 0.00, samples=1 00:23:26.881 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:26.881 lat (usec) : 250=45.09%, 500=54.63%, 750=0.26% 00:23:26.881 lat (msec) : 2=0.03% 00:23:26.881 cpu : usr=2.00%, sys=4.20%, ctx=3504, majf=0, minf=1 00:23:26.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 issued rwts: total=1536,1966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:26.881 job2: (groupid=0, jobs=1): err= 0: pid=3153911: Tue Apr 23 16:22:25 2024 00:23:26.881 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:23:26.881 slat (nsec): min=7939, max=35910, avg=30009.77, stdev=5073.01 00:23:26.881 clat (usec): min=40845, max=41971, avg=41157.90, stdev=388.11 00:23:26.881 lat (usec): min=40875, max=42002, avg=41187.91, stdev=387.75 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:23:26.881 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:26.881 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:23:26.881 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:26.881 | 99.99th=[42206] 00:23:26.881 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:23:26.881 slat (nsec): min=5493, max=63480, avg=7676.20, stdev=3092.70 00:23:26.881 clat (usec): min=169, max=823, avg=221.67, stdev=47.57 00:23:26.881 lat (usec): min=176, max=886, avg=229.34, stdev=49.24 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 198], 00:23:26.881 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:23:26.881 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:23:26.881 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 824], 99.95th=[ 824], 00:23:26.881 | 99.99th=[ 824] 00:23:26.881 bw ( KiB/s): min= 4096, max= 4096, per=19.26%, avg=4096.00, stdev= 0.00, samples=1 00:23:26.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:26.881 lat (usec) : 250=86.52%, 500=8.99%, 750=0.19%, 1000=0.19% 00:23:26.881 lat (msec) : 50=4.12% 00:23:26.881 cpu : usr=0.10%, sys=0.49%, ctx=534, majf=0, minf=1 00:23:26.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:26.881 job3: (groupid=0, jobs=1): err= 0: pid=3153919: Tue Apr 23 16:22:25 2024 00:23:26.881 read: IOPS=513, BW=2055KiB/s (2104kB/s)(2104KiB/1024msec) 00:23:26.881 slat (nsec): min=5006, max=32680, avg=7772.35, stdev=3807.30 00:23:26.881 clat (usec): min=323, max=42064, avg=1484.87, stdev=6695.80 00:23:26.881 lat (usec): min=330, max=42092, avg=1492.65, stdev=6699.35 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:23:26.881 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 388], 
00:23:26.881 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 420], 00:23:26.881 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:26.881 | 99.99th=[42206] 00:23:26.881 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:23:26.881 slat (nsec): min=5380, max=65845, avg=8370.96, stdev=2294.37 00:23:26.881 clat (usec): min=168, max=892, avg=221.24, stdev=46.68 00:23:26.881 lat (usec): min=175, max=957, avg=229.61, stdev=47.64 00:23:26.881 clat percentiles (usec): 00:23:26.881 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 196], 00:23:26.881 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:23:26.881 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 258], 95.00th=[ 277], 00:23:26.881 | 99.00th=[ 334], 99.50th=[ 603], 99.90th=[ 685], 99.95th=[ 889], 00:23:26.881 | 99.99th=[ 889] 00:23:26.881 bw ( KiB/s): min= 8192, max= 8192, per=38.51%, avg=8192.00, stdev= 0.00, samples=1 00:23:26.881 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:23:26.881 lat (usec) : 250=57.68%, 500=41.03%, 750=0.32%, 1000=0.06% 00:23:26.881 lat (msec) : 50=0.90% 00:23:26.881 cpu : usr=0.59%, sys=1.27%, ctx=1550, majf=0, minf=1 00:23:26.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:26.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:26.881 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:26.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:26.881 00:23:26.881 Run status group 0 (all jobs): 00:23:26.881 READ: bw=13.8MiB/s (14.5MB/s), 85.9KiB/s-6138KiB/s (87.9kB/s-6285kB/s), io=14.1MiB (14.8MB), run=1001-1025msec 00:23:26.881 WRITE: bw=20.8MiB/s (21.8MB/s), 1998KiB/s-7856KiB/s (2046kB/s-8045kB/s), io=21.3MiB (22.3MB), run=1001-1025msec 00:23:26.881 00:23:26.881 Disk stats (read/write): 00:23:26.881 nvme0n1: ios=1340/1536, merge=0/0, ticks=1301/324, in_queue=1625, util=83.87% 00:23:26.881 nvme0n2: ios=1479/1536, merge=0/0, ticks=672/312, in_queue=984, util=87.92% 00:23:26.881 nvme0n3: ios=74/512, merge=0/0, ticks=791/110, in_queue=901, util=93.84% 00:23:26.881 nvme0n4: ios=578/1024, merge=0/0, ticks=645/226, in_queue=871, util=95.81% 00:23:26.881 16:22:25 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:26.881 [global] 00:23:26.881 thread=1 00:23:26.881 invalidate=1 00:23:26.881 rw=randwrite 00:23:26.881 time_based=1 00:23:26.881 runtime=1 00:23:26.881 ioengine=libaio 00:23:26.881 direct=1 00:23:26.881 bs=4096 00:23:26.881 iodepth=1 00:23:26.881 norandommap=0 00:23:26.881 numjobs=1 00:23:26.881 00:23:26.881 verify_dump=1 00:23:26.881 verify_backlog=512 00:23:26.881 verify_state_save=0 00:23:26.881 do_verify=1 00:23:26.881 verify=crc32c-intel 00:23:26.881 [job0] 00:23:26.881 filename=/dev/nvme0n1 00:23:26.881 [job1] 00:23:26.881 filename=/dev/nvme0n2 00:23:26.881 [job2] 00:23:26.881 filename=/dev/nvme0n3 00:23:26.881 [job3] 00:23:26.881 filename=/dev/nvme0n4 00:23:26.881 Could not set queue depth (nvme0n1) 00:23:26.881 Could not set queue depth (nvme0n2) 00:23:26.881 Could not set queue depth (nvme0n3) 00:23:26.881 Could not set queue depth (nvme0n4) 00:23:27.225 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:27.225 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:27.225 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:27.225 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:27.225 fio-3.35 00:23:27.225 Starting 4 threads 00:23:28.175 00:23:28.175 job0: (groupid=0, jobs=1): err= 0: pid=3154428: Tue Apr 23 16:22:27 2024 00:23:28.175 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:23:28.175 slat (nsec): min=7366, max=33815, avg=27954.73, stdev=6811.64 00:23:28.175 clat (usec): min=1031, max=42044, avg=40066.91, stdev=8719.36 00:23:28.175 lat (usec): min=1055, max=42074, avg=40094.86, stdev=8720.07 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[ 1029], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:28.175 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:23:28.175 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:28.175 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:28.175 | 99.99th=[42206] 00:23:28.175 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:23:28.175 slat (usec): min=4, max=206, avg=16.91, stdev=12.33 00:23:28.175 clat (usec): min=156, max=908, avg=280.31, stdev=73.88 00:23:28.175 lat (usec): min=163, max=1115, avg=297.22, stdev=82.54 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 212], 00:23:28.175 | 30.00th=[ 233], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:23:28.175 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 375], 00:23:28.175 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 906], 99.95th=[ 906], 00:23:28.175 | 99.99th=[ 906] 00:23:28.175 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:23:28.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:28.175 lat (usec) : 250=35.02%, 500=59.18%, 750=1.31%, 1000=0.37% 00:23:28.175 lat (msec) : 2=0.19%, 50=3.93% 00:23:28.175 cpu : usr=0.48%, sys=0.77%, ctx=535, majf=0, minf=1 00:23:28.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:28.175 job1: (groupid=0, jobs=1): err= 0: pid=3154442: Tue Apr 23 16:22:27 2024 00:23:28.175 read: IOPS=19, BW=79.8KiB/s (81.8kB/s)(80.0KiB/1002msec) 00:23:28.175 slat (nsec): min=24416, max=39775, avg=33900.10, stdev=3050.44 00:23:28.175 clat (usec): min=41775, max=42097, avg=41949.31, stdev=80.50 00:23:28.175 lat (usec): min=41809, max=42134, avg=41983.21, stdev=80.08 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:28.175 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:28.175 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:28.175 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:28.175 | 99.99th=[42206] 00:23:28.175 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:23:28.175 slat (nsec): min=5117, max=45356, avg=17927.50, stdev=10131.85 00:23:28.175 clat (usec): min=196, 
max=636, avg=293.72, stdev=67.44 00:23:28.175 lat (usec): min=202, max=681, avg=311.64, stdev=71.73 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 231], 00:23:28.175 | 30.00th=[ 247], 40.00th=[ 277], 50.00th=[ 297], 60.00th=[ 306], 00:23:28.175 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 400], 00:23:28.175 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 635], 00:23:28.175 | 99.99th=[ 635] 00:23:28.175 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:23:28.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:28.175 lat (usec) : 250=29.89%, 500=64.85%, 750=1.50% 00:23:28.175 lat (msec) : 50=3.76% 00:23:28.175 cpu : usr=0.50%, sys=1.20%, ctx=533, majf=0, minf=1 00:23:28.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:28.175 job2: (groupid=0, jobs=1): err= 0: pid=3154467: Tue Apr 23 16:22:27 2024 00:23:28.175 read: IOPS=20, BW=82.8KiB/s (84.7kB/s)(84.0KiB/1015msec) 00:23:28.175 slat (nsec): min=5461, max=38095, avg=32196.95, stdev=6357.36 00:23:28.175 clat (usec): min=41191, max=41982, avg=41918.84, stdev=167.40 00:23:28.175 lat (usec): min=41196, max=42014, avg=41951.03, stdev=173.48 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:23:28.175 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:28.175 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:28.175 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:28.175 | 99.99th=[42206] 00:23:28.175 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:23:28.175 slat (nsec): min=4477, max=60625, avg=10211.09, stdev=7867.02 00:23:28.175 clat (usec): min=153, max=993, avg=248.09, stdev=87.15 00:23:28.175 lat (usec): min=159, max=1054, avg=258.30, stdev=90.38 00:23:28.175 clat percentiles (usec): 00:23:28.175 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:23:28.175 | 30.00th=[ 194], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 229], 00:23:28.175 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 371], 95.00th=[ 424], 00:23:28.175 | 99.00th=[ 510], 99.50th=[ 537], 99.90th=[ 996], 99.95th=[ 996], 00:23:28.175 | 99.99th=[ 996] 00:23:28.175 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:23:28.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:28.175 lat (usec) : 250=64.35%, 500=30.39%, 750=1.13%, 1000=0.19% 00:23:28.175 lat (msec) : 50=3.94% 00:23:28.175 cpu : usr=0.20%, sys=0.79%, ctx=534, majf=0, minf=1 00:23:28.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.175 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:28.175 job3: (groupid=0, jobs=1): err= 0: pid=3154477: Tue Apr 23 16:22:27 2024 00:23:28.175 read: IOPS=20, BW=81.3KiB/s 
(83.3kB/s)(84.0KiB/1033msec) 00:23:28.175 slat (nsec): min=7448, max=38260, avg=32860.57, stdev=6052.58 00:23:28.175 clat (usec): min=40967, max=41991, avg=41715.57, stdev=395.90 00:23:28.176 lat (usec): min=41000, max=42024, avg=41748.43, stdev=398.04 00:23:28.176 clat percentiles (usec): 00:23:28.176 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:23:28.176 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:28.176 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:28.176 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:28.176 | 99.99th=[42206] 00:23:28.176 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:23:28.176 slat (nsec): min=4775, max=48660, avg=16902.70, stdev=9902.82 00:23:28.176 clat (usec): min=178, max=581, avg=282.66, stdev=74.61 00:23:28.176 lat (usec): min=184, max=630, avg=299.56, stdev=79.88 00:23:28.176 clat percentiles (usec): 00:23:28.176 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 212], 00:23:28.176 | 30.00th=[ 223], 40.00th=[ 243], 50.00th=[ 281], 60.00th=[ 297], 00:23:28.176 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 420], 00:23:28.176 | 99.00th=[ 506], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 578], 00:23:28.176 | 99.99th=[ 578] 00:23:28.176 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:23:28.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:28.176 lat (usec) : 250=40.34%, 500=54.60%, 750=1.13% 00:23:28.176 lat (msec) : 50=3.94% 00:23:28.176 cpu : usr=0.39%, sys=1.26%, ctx=534, majf=0, minf=1 00:23:28.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.176 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:28.176 00:23:28.176 Run status group 0 (all jobs): 00:23:28.176 READ: bw=324KiB/s (332kB/s), 79.8KiB/s-84.9KiB/s (81.8kB/s-86.9kB/s), io=336KiB (344kB), run=1002-1037msec 00:23:28.176 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2044KiB/s (2022kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1037msec 00:23:28.176 00:23:28.176 Disk stats (read/write): 00:23:28.176 nvme0n1: ios=67/512, merge=0/0, ticks=702/139, in_queue=841, util=86.17% 00:23:28.176 nvme0n2: ios=47/512, merge=0/0, ticks=984/121, in_queue=1105, util=98.37% 00:23:28.176 nvme0n3: ios=41/512, merge=0/0, ticks=1556/124, in_queue=1680, util=93.16% 00:23:28.176 nvme0n4: ios=74/512, merge=0/0, ticks=972/123, in_queue=1095, util=97.02% 00:23:28.176 16:22:27 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:28.447 [global] 00:23:28.447 thread=1 00:23:28.447 invalidate=1 00:23:28.447 rw=write 00:23:28.447 time_based=1 00:23:28.447 runtime=1 00:23:28.447 ioengine=libaio 00:23:28.447 direct=1 00:23:28.447 bs=4096 00:23:28.447 iodepth=128 00:23:28.447 norandommap=0 00:23:28.447 numjobs=1 00:23:28.447 00:23:28.447 verify_dump=1 00:23:28.447 verify_backlog=512 00:23:28.447 verify_state_save=0 00:23:28.447 do_verify=1 00:23:28.447 verify=crc32c-intel 00:23:28.447 [job0] 00:23:28.447 filename=/dev/nvme0n1 00:23:28.447 [job1] 00:23:28.447 filename=/dev/nvme0n2 00:23:28.447 [job2] 00:23:28.447 filename=/dev/nvme0n3 00:23:28.447 
[job3] 00:23:28.447 filename=/dev/nvme0n4 00:23:28.447 Could not set queue depth (nvme0n1) 00:23:28.447 Could not set queue depth (nvme0n2) 00:23:28.447 Could not set queue depth (nvme0n3) 00:23:28.447 Could not set queue depth (nvme0n4) 00:23:28.709 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:28.709 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:28.709 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:28.709 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:28.709 fio-3.35 00:23:28.709 Starting 4 threads 00:23:30.106 00:23:30.106 job0: (groupid=0, jobs=1): err= 0: pid=3154982: Tue Apr 23 16:22:28 2024 00:23:30.106 read: IOPS=5525, BW=21.6MiB/s (22.6MB/s)(21.6MiB/1003msec) 00:23:30.106 slat (nsec): min=859, max=15045k, avg=95778.11, stdev=722407.13 00:23:30.106 clat (usec): min=810, max=45552, avg=12319.11, stdev=5080.39 00:23:30.106 lat (usec): min=4105, max=45557, avg=12414.88, stdev=5115.93 00:23:30.106 clat percentiles (usec): 00:23:30.106 | 1.00th=[ 4555], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9634], 00:23:30.106 | 30.00th=[10159], 40.00th=[10421], 50.00th=[11076], 60.00th=[11469], 00:23:30.106 | 70.00th=[12649], 80.00th=[14091], 90.00th=[17171], 95.00th=[18744], 00:23:30.106 | 99.00th=[41681], 99.50th=[44303], 99.90th=[44827], 99.95th=[45351], 00:23:30.106 | 99.99th=[45351] 00:23:30.106 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:23:30.106 slat (nsec): min=1792, max=10562k, avg=78367.96, stdev=566202.35 00:23:30.106 clat (usec): min=1403, max=23682, avg=10410.17, stdev=3250.91 00:23:30.106 lat (usec): min=1412, max=23688, avg=10488.54, stdev=3266.85 00:23:30.106 clat percentiles (usec): 00:23:30.106 | 1.00th=[ 3720], 5.00th=[ 5604], 10.00th=[ 6652], 20.00th=[ 7111], 00:23:30.106 | 30.00th=[ 8094], 40.00th=[ 9503], 50.00th=[11207], 60.00th=[11469], 00:23:30.106 | 70.00th=[11863], 80.00th=[12649], 90.00th=[15008], 95.00th=[16188], 00:23:30.106 | 99.00th=[17695], 99.50th=[19792], 99.90th=[22938], 99.95th=[23725], 00:23:30.106 | 99.99th=[23725] 00:23:30.106 bw ( KiB/s): min=20480, max=24576, per=26.35%, avg=22528.00, stdev=2896.31, samples=2 00:23:30.106 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:23:30.106 lat (usec) : 1000=0.01% 00:23:30.106 lat (msec) : 2=0.03%, 4=0.52%, 10=33.52%, 20=63.84%, 50=2.09% 00:23:30.106 cpu : usr=3.49%, sys=5.89%, ctx=441, majf=0, minf=1 00:23:30.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:30.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.106 issued rwts: total=5542,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.106 job1: (groupid=0, jobs=1): err= 0: pid=3154983: Tue Apr 23 16:22:28 2024 00:23:30.106 read: IOPS=5506, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1004msec) 00:23:30.106 slat (nsec): min=899, max=9893.6k, avg=91155.85, stdev=645261.70 00:23:30.106 clat (usec): min=2773, max=21100, avg=11771.14, stdev=3057.60 00:23:30.106 lat (usec): min=3897, max=21106, avg=11862.30, stdev=3065.71 00:23:30.106 clat percentiles (usec): 00:23:30.106 | 1.00th=[ 5407], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 9241], 
00:23:30.106 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[12256], 00:23:30.106 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15926], 95.00th=[17957], 00:23:30.106 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20579], 99.95th=[21103], 00:23:30.106 | 99.99th=[21103] 00:23:30.106 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:23:30.106 slat (nsec): min=1837, max=22863k, avg=83419.92, stdev=632582.04 00:23:30.106 clat (usec): min=1957, max=25488, avg=10548.62, stdev=3171.14 00:23:30.106 lat (usec): min=1991, max=25497, avg=10632.04, stdev=3192.33 00:23:30.106 clat percentiles (usec): 00:23:30.106 | 1.00th=[ 3294], 5.00th=[ 5276], 10.00th=[ 6259], 20.00th=[ 7242], 00:23:30.106 | 30.00th=[ 8356], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:23:30.106 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14353], 95.00th=[15401], 00:23:30.106 | 99.00th=[17695], 99.50th=[19530], 99.90th=[21103], 99.95th=[23987], 00:23:30.106 | 99.99th=[25560] 00:23:30.106 bw ( KiB/s): min=22096, max=22960, per=26.35%, avg=22528.00, stdev=610.94, samples=2 00:23:30.106 iops : min= 5524, max= 5740, avg=5632.00, stdev=152.74, samples=2 00:23:30.106 lat (msec) : 2=0.01%, 4=1.16%, 10=31.69%, 20=66.71%, 50=0.44% 00:23:30.107 cpu : usr=4.19%, sys=5.98%, ctx=497, majf=0, minf=1 00:23:30.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:23:30.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.107 issued rwts: total=5529,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.107 job2: (groupid=0, jobs=1): err= 0: pid=3154984: Tue Apr 23 16:22:28 2024 00:23:30.107 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:23:30.107 slat (nsec): min=894, max=6362.0k, avg=101640.77, stdev=627199.72 00:23:30.107 clat (usec): min=1437, max=20706, avg=12604.19, stdev=1950.57 00:23:30.107 lat (usec): min=6218, max=23996, avg=12705.83, stdev=2001.31 00:23:30.107 clat percentiles (usec): 00:23:30.107 | 1.00th=[ 6521], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11600], 00:23:30.107 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:23:30.107 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15139], 95.00th=[16057], 00:23:30.107 | 99.00th=[17695], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:23:30.107 | 99.99th=[20579] 00:23:30.107 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:23:30.107 slat (nsec): min=1575, max=7193.0k, avg=98838.53, stdev=541017.50 00:23:30.107 clat (usec): min=7158, max=21282, avg=13154.91, stdev=1661.72 00:23:30.107 lat (usec): min=7163, max=21354, avg=13253.75, stdev=1720.26 00:23:30.107 clat percentiles (usec): 00:23:30.107 | 1.00th=[ 8291], 5.00th=[10159], 10.00th=[11207], 20.00th=[12387], 00:23:30.107 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:23:30.107 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14615], 95.00th=[16319], 00:23:30.107 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20317], 99.95th=[20579], 00:23:30.107 | 99.99th=[21365] 00:23:30.107 bw ( KiB/s): min=20472, max=20480, per=23.95%, avg=20476.00, stdev= 5.66, samples=2 00:23:30.107 iops : min= 5118, max= 5120, avg=5119.00, stdev= 1.41, samples=2 00:23:30.107 lat (msec) : 2=0.01%, 10=6.46%, 20=93.43%, 50=0.09% 00:23:30.107 cpu : usr=2.19%, sys=3.88%, ctx=604, majf=0, minf=1 00:23:30.107 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:30.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.107 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.107 job3: (groupid=0, jobs=1): err= 0: pid=3154985: Tue Apr 23 16:22:28 2024 00:23:30.107 read: IOPS=5075, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec) 00:23:30.107 slat (nsec): min=939, max=11902k, avg=102410.81, stdev=742291.84 00:23:30.107 clat (usec): min=2134, max=24337, avg=13033.19, stdev=3535.09 00:23:30.107 lat (usec): min=3962, max=24386, avg=13135.60, stdev=3568.93 00:23:30.107 clat percentiles (usec): 00:23:30.107 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10290], 00:23:30.107 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12518], 60.00th=[13173], 00:23:30.107 | 70.00th=[14484], 80.00th=[15795], 90.00th=[18220], 95.00th=[20055], 00:23:30.107 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23725], 99.95th=[23725], 00:23:30.107 | 99.99th=[24249] 00:23:30.107 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:23:30.107 slat (nsec): min=1781, max=11246k, avg=88058.05, stdev=545067.55 00:23:30.107 clat (usec): min=2900, max=23737, avg=11893.80, stdev=3362.81 00:23:30.107 lat (usec): min=2908, max=23744, avg=11981.86, stdev=3374.19 00:23:30.107 clat percentiles (usec): 00:23:30.107 | 1.00th=[ 3785], 5.00th=[ 6063], 10.00th=[ 7373], 20.00th=[ 8455], 00:23:30.107 | 30.00th=[10421], 40.00th=[12125], 50.00th=[12780], 60.00th=[13042], 00:23:30.107 | 70.00th=[13304], 80.00th=[13566], 90.00th=[15139], 95.00th=[17957], 00:23:30.107 | 99.00th=[20055], 99.50th=[21890], 99.90th=[23725], 99.95th=[23725], 00:23:30.107 | 99.99th=[23725] 00:23:30.107 bw ( KiB/s): min=20480, max=20480, per=23.95%, avg=20480.00, stdev= 0.00, samples=2 00:23:30.107 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:23:30.107 lat (msec) : 4=0.65%, 10=22.50%, 20=73.36%, 50=3.49% 00:23:30.107 cpu : usr=3.38%, sys=6.17%, ctx=514, majf=0, minf=1 00:23:30.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:30.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.107 issued rwts: total=5106,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.107 00:23:30.107 Run status group 0 (all jobs): 00:23:30.107 READ: bw=81.2MiB/s (85.1MB/s), 18.4MiB/s-21.6MiB/s (19.3MB/s-22.6MB/s), io=81.7MiB (85.7MB), run=1003-1006msec 00:23:30.107 WRITE: bw=83.5MiB/s (87.6MB/s), 19.9MiB/s-21.9MiB/s (20.8MB/s-23.0MB/s), io=84.0MiB (88.1MB), run=1003-1006msec 00:23:30.107 00:23:30.107 Disk stats (read/write): 00:23:30.107 nvme0n1: ios=4630/5034, merge=0/0, ticks=53389/50253, in_queue=103642, util=97.19% 00:23:30.107 nvme0n2: ios=4619/4667, merge=0/0, ticks=53278/47369, in_queue=100647, util=97.55% 00:23:30.107 nvme0n3: ios=4107/4106, merge=0/0, ticks=26504/25760, in_queue=52264, util=89.89% 00:23:30.107 nvme0n4: ios=4135/4383, merge=0/0, ticks=53392/50248, in_queue=103640, util=96.49% 00:23:30.107 16:22:28 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:30.107 [global] 00:23:30.107 thread=1 00:23:30.107 
invalidate=1 00:23:30.107 rw=randwrite 00:23:30.107 time_based=1 00:23:30.107 runtime=1 00:23:30.107 ioengine=libaio 00:23:30.107 direct=1 00:23:30.107 bs=4096 00:23:30.107 iodepth=128 00:23:30.107 norandommap=0 00:23:30.107 numjobs=1 00:23:30.107 00:23:30.107 verify_dump=1 00:23:30.107 verify_backlog=512 00:23:30.107 verify_state_save=0 00:23:30.107 do_verify=1 00:23:30.107 verify=crc32c-intel 00:23:30.107 [job0] 00:23:30.107 filename=/dev/nvme0n1 00:23:30.107 [job1] 00:23:30.107 filename=/dev/nvme0n2 00:23:30.107 [job2] 00:23:30.107 filename=/dev/nvme0n3 00:23:30.107 [job3] 00:23:30.107 filename=/dev/nvme0n4 00:23:30.107 Could not set queue depth (nvme0n1) 00:23:30.107 Could not set queue depth (nvme0n2) 00:23:30.107 Could not set queue depth (nvme0n3) 00:23:30.107 Could not set queue depth (nvme0n4) 00:23:30.370 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:30.370 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:30.370 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:30.370 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:30.370 fio-3.35 00:23:30.370 Starting 4 threads 00:23:31.754 00:23:31.754 job0: (groupid=0, jobs=1): err= 0: pid=3155462: Tue Apr 23 16:22:30 2024 00:23:31.754 read: IOPS=2048, BW=8194KiB/s (8391kB/s)(8276KiB/1010msec) 00:23:31.754 slat (nsec): min=942, max=26520k, avg=157341.78, stdev=1123338.38 00:23:31.754 clat (usec): min=6804, max=62710, avg=17379.45, stdev=10060.47 00:23:31.754 lat (usec): min=7282, max=62747, avg=17536.79, stdev=10167.70 00:23:31.754 clat percentiles (usec): 00:23:31.754 | 1.00th=[ 7373], 5.00th=[10159], 10.00th=[11338], 20.00th=[11731], 00:23:31.754 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:23:31.754 | 70.00th=[15533], 80.00th=[25560], 90.00th=[35390], 95.00th=[44827], 00:23:31.754 | 99.00th=[45876], 99.50th=[45876], 99.90th=[58983], 99.95th=[58983], 00:23:31.754 | 99.99th=[62653] 00:23:31.754 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:23:31.754 slat (nsec): min=1521, max=32576k, avg=259619.63, stdev=1381888.98 00:23:31.754 clat (usec): min=8768, max=97411, avg=35734.38, stdev=22095.92 00:23:31.754 lat (usec): min=8774, max=97420, avg=35994.00, stdev=22245.09 00:23:31.754 clat percentiles (usec): 00:23:31.754 | 1.00th=[11994], 5.00th=[15008], 10.00th=[18220], 20.00th=[19530], 00:23:31.754 | 30.00th=[20579], 40.00th=[21627], 50.00th=[27132], 60.00th=[33424], 00:23:31.754 | 70.00th=[40109], 80.00th=[48497], 90.00th=[74974], 95.00th=[89654], 00:23:31.754 | 99.00th=[95945], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:23:31.754 | 99.99th=[96994] 00:23:31.754 bw ( KiB/s): min= 8112, max=11512, per=13.01%, avg=9812.00, stdev=2404.16, samples=2 00:23:31.754 iops : min= 2028, max= 2878, avg=2453.00, stdev=601.04, samples=2 00:23:31.754 lat (msec) : 10=2.12%, 20=46.96%, 50=40.81%, 100=10.11% 00:23:31.754 cpu : usr=1.78%, sys=1.49%, ctx=350, majf=0, minf=1 00:23:31.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:31.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.754 issued rwts: total=2069,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.754 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:23:31.754 job1: (groupid=0, jobs=1): err= 0: pid=3155463: Tue Apr 23 16:22:30 2024 00:23:31.754 read: IOPS=4370, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1005msec) 00:23:31.754 slat (nsec): min=914, max=20315k, avg=100312.44, stdev=743599.03 00:23:31.754 clat (usec): min=2634, max=31950, avg=12984.90, stdev=4831.48 00:23:31.754 lat (usec): min=5177, max=31997, avg=13085.21, stdev=4863.75 00:23:31.754 clat percentiles (usec): 00:23:31.754 | 1.00th=[ 7046], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9372], 00:23:31.754 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11207], 60.00th=[12649], 00:23:31.754 | 70.00th=[13960], 80.00th=[16188], 90.00th=[19268], 95.00th=[23987], 00:23:31.754 | 99.00th=[29492], 99.50th=[30278], 99.90th=[31065], 99.95th=[31065], 00:23:31.754 | 99.99th=[31851] 00:23:31.754 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:23:31.754 slat (nsec): min=1632, max=17447k, avg=114649.14, stdev=648586.61 00:23:31.754 clat (usec): min=2752, max=36462, avg=15184.18, stdev=7832.20 00:23:31.754 lat (usec): min=2758, max=36468, avg=15298.83, stdev=7878.72 00:23:31.754 clat percentiles (usec): 00:23:31.754 | 1.00th=[ 4228], 5.00th=[ 5866], 10.00th=[ 7177], 20.00th=[ 7898], 00:23:31.754 | 30.00th=[ 8717], 40.00th=[10290], 50.00th=[13304], 60.00th=[17171], 00:23:31.754 | 70.00th=[20055], 80.00th=[21627], 90.00th=[26870], 95.00th=[30540], 00:23:31.754 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:23:31.754 | 99.99th=[36439] 00:23:31.754 bw ( KiB/s): min=17008, max=19856, per=24.44%, avg=18432.00, stdev=2013.84, samples=2 00:23:31.754 iops : min= 4252, max= 4964, avg=4608.00, stdev=503.46, samples=2 00:23:31.754 lat (msec) : 4=0.36%, 10=35.36%, 20=44.72%, 50=19.57% 00:23:31.754 cpu : usr=2.39%, sys=3.69%, ctx=418, majf=0, minf=1 00:23:31.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:31.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.754 issued rwts: total=4392,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:31.755 job2: (groupid=0, jobs=1): err= 0: pid=3155464: Tue Apr 23 16:22:30 2024 00:23:31.755 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:23:31.755 slat (nsec): min=895, max=13333k, avg=75133.49, stdev=639038.88 00:23:31.755 clat (usec): min=3005, max=38730, avg=11221.14, stdev=4141.53 00:23:31.755 lat (usec): min=3012, max=50745, avg=11296.28, stdev=4192.53 00:23:31.755 clat percentiles (usec): 00:23:31.755 | 1.00th=[ 4228], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8717], 00:23:31.755 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10290], 00:23:31.755 | 70.00th=[11338], 80.00th=[13566], 90.00th=[15795], 95.00th=[18220], 00:23:31.755 | 99.00th=[29230], 99.50th=[30540], 99.90th=[32637], 99.95th=[38536], 00:23:31.755 | 99.99th=[38536] 00:23:31.755 write: IOPS=5928, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1004msec); 0 zone resets 00:23:31.755 slat (nsec): min=1406, max=20750k, avg=73831.31, stdev=577724.32 00:23:31.755 clat (usec): min=544, max=55305, avg=10799.91, stdev=6781.52 00:23:31.755 lat (usec): min=644, max=55312, avg=10873.74, stdev=6813.92 00:23:31.755 clat percentiles (usec): 00:23:31.755 | 1.00th=[ 2057], 5.00th=[ 4080], 10.00th=[ 5276], 20.00th=[ 6718], 00:23:31.755 | 30.00th=[ 7832], 40.00th=[ 9110], 50.00th=[10552], 
60.00th=[11076], 00:23:31.755 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14746], 95.00th=[17957], 00:23:31.755 | 99.00th=[44827], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:23:31.755 | 99.99th=[55313] 00:23:31.755 bw ( KiB/s): min=22056, max=24544, per=30.90%, avg=23300.00, stdev=1759.28, samples=2 00:23:31.755 iops : min= 5514, max= 6136, avg=5825.00, stdev=439.82, samples=2 00:23:31.755 lat (usec) : 750=0.03%, 1000=0.06% 00:23:31.755 lat (msec) : 2=0.34%, 4=2.30%, 10=45.36%, 20=48.21%, 50=3.20% 00:23:31.755 lat (msec) : 100=0.50% 00:23:31.755 cpu : usr=2.69%, sys=4.99%, ctx=506, majf=0, minf=1 00:23:31.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:31.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.755 issued rwts: total=5632,5952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:31.755 job3: (groupid=0, jobs=1): err= 0: pid=3155465: Tue Apr 23 16:22:30 2024 00:23:31.755 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:23:31.755 slat (nsec): min=896, max=11590k, avg=96208.22, stdev=703186.39 00:23:31.755 clat (usec): min=3533, max=27295, avg=11932.54, stdev=3595.05 00:23:31.755 lat (usec): min=3537, max=27329, avg=12028.75, stdev=3627.28 00:23:31.755 clat percentiles (usec): 00:23:31.755 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[10028], 00:23:31.755 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:23:31.755 | 70.00th=[11994], 80.00th=[14615], 90.00th=[17171], 95.00th=[19530], 00:23:31.755 | 99.00th=[23462], 99.50th=[25035], 99.90th=[26608], 99.95th=[26608], 00:23:31.755 | 99.99th=[27395] 00:23:31.755 write: IOPS=5902, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1003msec); 0 zone resets 00:23:31.755 slat (nsec): min=1512, max=8739.3k, avg=71742.49, stdev=353927.33 00:23:31.755 clat (usec): min=400, max=23116, avg=10058.49, stdev=3008.03 00:23:31.755 lat (usec): min=1028, max=23119, avg=10130.23, stdev=3025.27 00:23:31.755 clat percentiles (usec): 00:23:31.755 | 1.00th=[ 2966], 5.00th=[ 4490], 10.00th=[ 5342], 20.00th=[ 7046], 00:23:31.755 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[11076], 60.00th=[11338], 00:23:31.755 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13042], 95.00th=[13698], 00:23:31.755 | 99.00th=[15926], 99.50th=[17171], 99.90th=[21890], 99.95th=[22676], 00:23:31.755 | 99.99th=[23200] 00:23:31.755 bw ( KiB/s): min=21760, max=24576, per=30.72%, avg=23168.00, stdev=1991.21, samples=2 00:23:31.755 iops : min= 5440, max= 6144, avg=5792.00, stdev=497.80, samples=2 00:23:31.755 lat (usec) : 500=0.01% 00:23:31.755 lat (msec) : 2=0.07%, 4=1.77%, 10=27.19%, 20=68.90%, 50=2.07% 00:23:31.755 cpu : usr=2.89%, sys=3.99%, ctx=675, majf=0, minf=1 00:23:31.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:23:31.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.755 issued rwts: total=5632,5920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:31.755 00:23:31.755 Run status group 0 (all jobs): 00:23:31.755 READ: bw=68.6MiB/s (71.9MB/s), 8194KiB/s-21.9MiB/s (8391kB/s-23.0MB/s), io=69.2MiB (72.6MB), run=1003-1010msec 00:23:31.755 WRITE: bw=73.6MiB/s (77.2MB/s), 9.90MiB/s-23.2MiB/s (10.4MB/s-24.3MB/s), 
io=74.4MiB (78.0MB), run=1003-1010msec 00:23:31.755 00:23:31.755 Disk stats (read/write): 00:23:31.755 nvme0n1: ios=2100/2246, merge=0/0, ticks=20513/38201, in_queue=58714, util=99.40% 00:23:31.755 nvme0n2: ios=3567/3591, merge=0/0, ticks=47187/59843, in_queue=107030, util=99.59% 00:23:31.755 nvme0n3: ios=5028/5120, merge=0/0, ticks=52058/49410, in_queue=101468, util=95.66% 00:23:31.755 nvme0n4: ios=4761/5120, merge=0/0, ticks=55298/48874, in_queue=104172, util=95.53% 00:23:31.755 16:22:30 -- target/fio.sh@55 -- # sync 00:23:31.755 16:22:30 -- target/fio.sh@59 -- # fio_pid=3155643 00:23:31.755 16:22:30 -- target/fio.sh@61 -- # sleep 3 00:23:31.755 16:22:30 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:31.755 [global] 00:23:31.755 thread=1 00:23:31.755 invalidate=1 00:23:31.755 rw=read 00:23:31.755 time_based=1 00:23:31.755 runtime=10 00:23:31.755 ioengine=libaio 00:23:31.755 direct=1 00:23:31.755 bs=4096 00:23:31.755 iodepth=1 00:23:31.755 norandommap=1 00:23:31.755 numjobs=1 00:23:31.755 00:23:31.755 [job0] 00:23:31.755 filename=/dev/nvme0n1 00:23:31.755 [job1] 00:23:31.755 filename=/dev/nvme0n2 00:23:31.755 [job2] 00:23:31.755 filename=/dev/nvme0n3 00:23:31.755 [job3] 00:23:31.755 filename=/dev/nvme0n4 00:23:31.755 Could not set queue depth (nvme0n1) 00:23:31.755 Could not set queue depth (nvme0n2) 00:23:31.755 Could not set queue depth (nvme0n3) 00:23:31.755 Could not set queue depth (nvme0n4) 00:23:32.018 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.018 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.018 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.018 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:32.018 fio-3.35 00:23:32.018 Starting 4 threads 00:23:34.556 16:22:33 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:34.815 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12947456, buflen=4096 00:23:34.815 fio: pid=3155936, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:34.815 16:22:33 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:34.815 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=17551360, buflen=4096 00:23:34.815 fio: pid=3155935, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:34.815 16:22:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:34.815 16:22:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:35.073 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=20566016, buflen=4096 00:23:35.073 fio: pid=3155933, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:35.073 16:22:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.073 16:22:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:35.073 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=311296, buflen=4096 00:23:35.073 fio: pid=3155934, err=121/file:io_u.c:1889, 
func=io_u error, error=Remote I/O error 00:23:35.332 16:22:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.332 16:22:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:23:35.332 00:23:35.332 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3155933: Tue Apr 23 16:22:34 2024 00:23:35.332 read: IOPS=1749, BW=6995KiB/s (7163kB/s)(19.6MiB/2871msec) 00:23:35.332 slat (usec): min=4, max=16407, avg=13.73, stdev=278.14 00:23:35.332 clat (usec): min=245, max=42002, avg=556.67, stdev=2763.97 00:23:35.332 lat (usec): min=252, max=42035, avg=570.40, stdev=2780.86 00:23:35.332 clat percentiles (usec): 00:23:35.332 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:23:35.332 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 379], 00:23:35.332 | 70.00th=[ 396], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 498], 00:23:35.332 | 99.00th=[ 603], 99.50th=[ 1045], 99.90th=[41681], 99.95th=[42206], 00:23:35.332 | 99.99th=[42206] 00:23:35.332 bw ( KiB/s): min= 96, max=11112, per=38.97%, avg=6414.40, stdev=5010.58, samples=5 00:23:35.332 iops : min= 24, max= 2778, avg=1603.60, stdev=1252.65, samples=5 00:23:35.332 lat (usec) : 250=0.08%, 500=95.44%, 750=3.88%, 1000=0.02% 00:23:35.332 lat (msec) : 2=0.10%, 50=0.46% 00:23:35.332 cpu : usr=0.38%, sys=1.85%, ctx=5025, majf=0, minf=1 00:23:35.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.332 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.332 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.332 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3155934: Tue Apr 23 16:22:34 2024 00:23:35.332 read: IOPS=25, BW=99.7KiB/s (102kB/s)(304KiB/3048msec) 00:23:35.332 slat (usec): min=6, max=115, avg=34.34, stdev=13.61 00:23:35.332 clat (usec): min=920, max=42140, avg=40048.36, stdev=7961.12 00:23:35.332 lat (usec): min=948, max=42170, avg=40082.74, stdev=7961.62 00:23:35.332 clat percentiles (usec): 00:23:35.332 | 1.00th=[ 922], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:23:35.332 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:35.332 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:35.332 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:35.332 | 99.99th=[42206] 00:23:35.332 bw ( KiB/s): min= 96, max= 104, per=0.59%, avg=97.60, stdev= 3.58, samples=5 00:23:35.332 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:23:35.332 lat (usec) : 1000=2.60% 00:23:35.332 lat (msec) : 2=1.30%, 50=94.81% 00:23:35.332 cpu : usr=0.13%, sys=0.00%, ctx=78, majf=0, minf=1 00:23:35.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.332 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.332 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.332 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3155935: Tue Apr 23 
16:22:34 2024 00:23:35.332 read: IOPS=1567, BW=6269KiB/s (6420kB/s)(16.7MiB/2734msec) 00:23:35.332 slat (nsec): min=3592, max=73726, avg=8269.45, stdev=4869.42 00:23:35.332 clat (usec): min=233, max=41840, avg=628.37, stdev=3450.85 00:23:35.333 lat (usec): min=239, max=41849, avg=636.64, stdev=3451.55 00:23:35.333 clat percentiles (usec): 00:23:35.333 | 1.00th=[ 260], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:23:35.333 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:23:35.333 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 424], 00:23:35.333 | 99.00th=[ 578], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:35.333 | 99.99th=[41681] 00:23:35.333 bw ( KiB/s): min= 104, max=11856, per=35.17%, avg=5790.40, stdev=5350.18, samples=5 00:23:35.333 iops : min= 26, max= 2964, avg=1447.60, stdev=1337.55, samples=5 00:23:35.333 lat (usec) : 250=0.40%, 500=98.04%, 750=0.70%, 1000=0.09% 00:23:35.333 lat (msec) : 2=0.02%, 50=0.72% 00:23:35.333 cpu : usr=0.48%, sys=1.46%, ctx=4286, majf=0, minf=1 00:23:35.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.333 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.333 issued rwts: total=4286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.333 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3155936: Tue Apr 23 16:22:34 2024 00:23:35.333 read: IOPS=1219, BW=4876KiB/s (4993kB/s)(12.3MiB/2593msec) 00:23:35.333 slat (nsec): min=3897, max=58986, avg=13554.52, stdev=10017.55 00:23:35.333 clat (usec): min=308, max=42138, avg=803.83, stdev=3960.19 00:23:35.333 lat (usec): min=314, max=42148, avg=817.38, stdev=3962.44 00:23:35.333 clat percentiles (usec): 00:23:35.333 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:23:35.333 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 441], 00:23:35.333 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 523], 00:23:35.333 | 99.00th=[ 1074], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:35.333 | 99.99th=[42206] 00:23:35.333 bw ( KiB/s): min= 96, max= 8760, per=28.66%, avg=4718.40, stdev=4334.79, samples=5 00:23:35.333 iops : min= 24, max= 2190, avg=1179.60, stdev=1083.70, samples=5 00:23:35.333 lat (usec) : 500=88.52%, 750=10.28%, 1000=0.13% 00:23:35.333 lat (msec) : 2=0.13%, 50=0.92% 00:23:35.333 cpu : usr=0.66%, sys=1.74%, ctx=3162, majf=0, minf=2 00:23:35.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.333 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.333 issued rwts: total=3162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:35.333 00:23:35.333 Run status group 0 (all jobs): 00:23:35.333 READ: bw=16.1MiB/s (16.9MB/s), 99.7KiB/s-6995KiB/s (102kB/s-7163kB/s), io=49.0MiB (51.4MB), run=2593-3048msec 00:23:35.333 00:23:35.333 Disk stats (read/write): 00:23:35.333 nvme0n1: ios=4997/0, merge=0/0, ticks=2754/0, in_queue=2754, util=94.76% 00:23:35.333 nvme0n2: ios=71/0, merge=0/0, ticks=2837/0, in_queue=2837, util=95.98% 00:23:35.333 nvme0n3: ios=3914/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.28% 00:23:35.333 nvme0n4: ios=2613/0, merge=0/0, 
ticks=2306/0, in_queue=2306, util=96.10% 00:23:35.333 16:22:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.333 16:22:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:35.592 16:22:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.592 16:22:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:35.592 16:22:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.592 16:22:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:35.850 16:22:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:35.850 16:22:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:36.107 16:22:34 -- target/fio.sh@69 -- # fio_status=0 00:23:36.107 16:22:34 -- target/fio.sh@70 -- # wait 3155643 00:23:36.107 16:22:34 -- target/fio.sh@70 -- # fio_status=4 00:23:36.107 16:22:34 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:36.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:36.364 16:22:35 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:36.364 16:22:35 -- common/autotest_common.sh@1198 -- # local i=0 00:23:36.364 16:22:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:36.364 16:22:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:36.364 16:22:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:36.364 16:22:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:36.364 16:22:35 -- common/autotest_common.sh@1210 -- # return 0 00:23:36.364 16:22:35 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:36.364 16:22:35 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:36.364 nvmf hotplug test: fio failed as expected 00:23:36.365 16:22:35 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.623 16:22:35 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:36.623 16:22:35 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:36.623 16:22:35 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:36.623 16:22:35 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:36.623 16:22:35 -- target/fio.sh@91 -- # nvmftestfini 00:23:36.623 16:22:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:36.623 16:22:35 -- nvmf/common.sh@116 -- # sync 00:23:36.623 16:22:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:36.623 16:22:35 -- nvmf/common.sh@119 -- # set +e 00:23:36.623 16:22:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:36.623 16:22:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:36.623 rmmod nvme_tcp 00:23:36.623 rmmod nvme_fabrics 00:23:36.623 rmmod nvme_keyring 00:23:36.623 16:22:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:36.623 16:22:35 -- nvmf/common.sh@123 -- # set -e 00:23:36.623 16:22:35 -- nvmf/common.sh@124 -- # return 0 00:23:36.623 16:22:35 -- nvmf/common.sh@477 -- # '[' -n 3152315 ']' 00:23:36.623 16:22:35 -- nvmf/common.sh@478 -- # killprocess 3152315 00:23:36.623 
16:22:35 -- common/autotest_common.sh@926 -- # '[' -z 3152315 ']' 00:23:36.623 16:22:35 -- common/autotest_common.sh@930 -- # kill -0 3152315 00:23:36.623 16:22:35 -- common/autotest_common.sh@931 -- # uname 00:23:36.623 16:22:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:36.623 16:22:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3152315 00:23:36.623 16:22:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:36.623 16:22:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:36.623 16:22:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3152315' 00:23:36.623 killing process with pid 3152315 00:23:36.623 16:22:35 -- common/autotest_common.sh@945 -- # kill 3152315 00:23:36.623 16:22:35 -- common/autotest_common.sh@950 -- # wait 3152315 00:23:37.190 16:22:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:37.190 16:22:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:37.190 16:22:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:37.190 16:22:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.190 16:22:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:37.190 16:22:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.190 16:22:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.190 16:22:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.093 16:22:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:39.093 00:23:39.093 real 0m27.373s 00:23:39.093 user 2m37.124s 00:23:39.093 sys 0m7.753s 00:23:39.093 16:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.093 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:39.093 ************************************ 00:23:39.093 END TEST nvmf_fio_target 00:23:39.093 ************************************ 00:23:39.353 16:22:38 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:39.353 16:22:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:39.353 16:22:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:39.353 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:39.353 ************************************ 00:23:39.353 START TEST nvmf_bdevio 00:23:39.353 ************************************ 00:23:39.353 16:22:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:39.353 * Looking for test storage... 
00:23:39.353 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:39.353 16:22:38 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.353 16:22:38 -- nvmf/common.sh@7 -- # uname -s 00:23:39.353 16:22:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.353 16:22:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.353 16:22:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.353 16:22:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.353 16:22:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.353 16:22:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.353 16:22:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.353 16:22:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.353 16:22:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.353 16:22:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.353 16:22:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:39.353 16:22:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:39.353 16:22:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.353 16:22:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.353 16:22:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:39.353 16:22:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:39.353 16:22:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.353 16:22:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.353 16:22:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.353 16:22:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.353 16:22:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.353 16:22:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.353 16:22:38 -- paths/export.sh@5 -- # export PATH 00:23:39.354 16:22:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.354 16:22:38 -- nvmf/common.sh@46 -- # : 0 00:23:39.354 16:22:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:39.354 16:22:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:39.354 16:22:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:39.354 16:22:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.354 16:22:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.354 16:22:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:39.354 16:22:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:39.354 16:22:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:39.354 16:22:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.354 16:22:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.354 16:22:38 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:39.354 16:22:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:39.354 16:22:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.354 16:22:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:39.354 16:22:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:39.354 16:22:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:39.354 16:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.354 16:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.354 16:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.354 16:22:38 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:39.354 16:22:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:39.354 16:22:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:39.354 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:23:44.620 16:22:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:44.620 16:22:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:44.620 16:22:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:44.620 16:22:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:44.620 16:22:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:44.620 16:22:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:44.620 16:22:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:44.620 16:22:43 -- nvmf/common.sh@294 -- # net_devs=() 00:23:44.621 16:22:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:44.621 16:22:43 -- 
nvmf/common.sh@295 -- # e810=() 00:23:44.621 16:22:43 -- nvmf/common.sh@295 -- # local -ga e810 00:23:44.621 16:22:43 -- nvmf/common.sh@296 -- # x722=() 00:23:44.621 16:22:43 -- nvmf/common.sh@296 -- # local -ga x722 00:23:44.621 16:22:43 -- nvmf/common.sh@297 -- # mlx=() 00:23:44.621 16:22:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:44.621 16:22:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.621 16:22:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:44.621 16:22:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.621 16:22:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:44.621 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:44.621 16:22:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:44.621 16:22:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:44.621 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:44.621 16:22:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.621 16:22:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.621 16:22:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.621 16:22:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:44.621 Found net devices under 0000:27:00.0: cvl_0_0 00:23:44.621 
16:22:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.621 16:22:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:44.621 16:22:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.621 16:22:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.621 16:22:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:44.621 Found net devices under 0000:27:00.1: cvl_0_1 00:23:44.621 16:22:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.621 16:22:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:44.621 16:22:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:44.621 16:22:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.621 16:22:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.621 16:22:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.621 16:22:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:44.621 16:22:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.621 16:22:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.621 16:22:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:44.621 16:22:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.621 16:22:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.621 16:22:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:44.621 16:22:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:44.621 16:22:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.621 16:22:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.621 16:22:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.621 16:22:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.621 16:22:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:44.621 16:22:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.621 16:22:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.621 16:22:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.621 16:22:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:44.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:23:44.621 00:23:44.621 --- 10.0.0.2 ping statistics --- 00:23:44.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.621 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:23:44.621 16:22:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:23:44.621 00:23:44.621 --- 10.0.0.1 ping statistics --- 00:23:44.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.621 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:44.621 16:22:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.621 16:22:43 -- nvmf/common.sh@410 -- # return 0 00:23:44.621 16:22:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:44.621 16:22:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.621 16:22:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:44.621 16:22:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.621 16:22:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:44.621 16:22:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:44.621 16:22:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:44.621 16:22:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:44.621 16:22:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:44.621 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.621 16:22:43 -- nvmf/common.sh@469 -- # nvmfpid=3160750 00:23:44.621 16:22:43 -- nvmf/common.sh@470 -- # waitforlisten 3160750 00:23:44.621 16:22:43 -- common/autotest_common.sh@819 -- # '[' -z 3160750 ']' 00:23:44.621 16:22:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.621 16:22:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:44.621 16:22:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.621 16:22:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:44.621 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:23:44.621 16:22:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:23:44.621 [2024-04-23 16:22:43.513918] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:44.621 [2024-04-23 16:22:43.514021] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.882 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.882 [2024-04-23 16:22:43.634332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.882 [2024-04-23 16:22:43.728178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.882 [2024-04-23 16:22:43.728345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.882 [2024-04-23 16:22:43.728358] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.882 [2024-04-23 16:22:43.728368] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
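For reference, the core mask "-m 0x78" passed to nvmf_tgt above selects CPU cores 3-6, which is consistent with the four "Reactor started on core ..." notices that follow. A minimal shell sketch for decoding such a mask (a hypothetical helper, not part of the test scripts):

    # decode an SPDK/DPDK core mask into the list of selected CPU cores
    mask=0x78                    # value taken from the nvmf_tgt command line above
    for core in $(seq 0 63); do
        if (( (mask >> core) & 1 )); then
            echo "core $core"
        fi
    done
    # for 0x78 this prints cores 3, 4, 5 and 6, one per line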
00:23:44.882 [2024-04-23 16:22:43.728559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:44.882 [2024-04-23 16:22:43.728720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.882 [2024-04-23 16:22:43.728702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:44.882 [2024-04-23 16:22:43.728752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:45.453 16:22:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:45.453 16:22:44 -- common/autotest_common.sh@852 -- # return 0 00:23:45.453 16:22:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:45.453 16:22:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 16:22:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.453 16:22:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.453 16:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 [2024-04-23 16:22:44.266905] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.453 16:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.453 16:22:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:45.453 16:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 Malloc0 00:23:45.453 16:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.453 16:22:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.453 16:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 16:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.453 16:22:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.453 16:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 16:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.453 16:22:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.453 16:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.453 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:23:45.453 [2024-04-23 16:22:44.332897] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.453 16:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.453 16:22:44 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:23:45.453 16:22:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:45.453 16:22:44 -- nvmf/common.sh@520 -- # config=() 00:23:45.453 16:22:44 -- nvmf/common.sh@520 -- # local subsystem config 00:23:45.453 16:22:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:45.453 16:22:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:45.453 { 00:23:45.453 "params": { 00:23:45.453 "name": "Nvme$subsystem", 00:23:45.453 "trtype": "$TEST_TRANSPORT", 00:23:45.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.453 "adrfam": "ipv4", 00:23:45.453 "trsvcid": "$NVMF_PORT", 
00:23:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.453 "hdgst": ${hdgst:-false}, 00:23:45.453 "ddgst": ${ddgst:-false} 00:23:45.453 }, 00:23:45.453 "method": "bdev_nvme_attach_controller" 00:23:45.453 } 00:23:45.453 EOF 00:23:45.453 )") 00:23:45.453 16:22:44 -- nvmf/common.sh@542 -- # cat 00:23:45.453 16:22:44 -- nvmf/common.sh@544 -- # jq . 00:23:45.453 16:22:44 -- nvmf/common.sh@545 -- # IFS=, 00:23:45.453 16:22:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:45.453 "params": { 00:23:45.453 "name": "Nvme1", 00:23:45.453 "trtype": "tcp", 00:23:45.453 "traddr": "10.0.0.2", 00:23:45.453 "adrfam": "ipv4", 00:23:45.453 "trsvcid": "4420", 00:23:45.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.453 "hdgst": false, 00:23:45.453 "ddgst": false 00:23:45.453 }, 00:23:45.453 "method": "bdev_nvme_attach_controller" 00:23:45.453 }' 00:23:45.712 [2024-04-23 16:22:44.419555] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:45.712 [2024-04-23 16:22:44.419692] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161064 ] 00:23:45.712 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.712 [2024-04-23 16:22:44.549234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:45.712 [2024-04-23 16:22:44.641413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.712 [2024-04-23 16:22:44.641513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.712 [2024-04-23 16:22:44.641519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.971 [2024-04-23 16:22:44.894681] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:45.971 [2024-04-23 16:22:44.894718] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:45.971 I/O targets: 00:23:45.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:45.971 00:23:45.971 00:23:45.971 CUnit - A unit testing framework for C - Version 2.1-3 00:23:45.971 http://cunit.sourceforge.net/ 00:23:45.971 00:23:45.971 00:23:45.971 Suite: bdevio tests on: Nvme1n1 00:23:46.229 Test: blockdev write read block ...passed 00:23:46.229 Test: blockdev write zeroes read block ...passed 00:23:46.229 Test: blockdev write zeroes read no split ...passed 00:23:46.229 Test: blockdev write zeroes read split ...passed 00:23:46.229 Test: blockdev write zeroes read split partial ...passed 00:23:46.229 Test: blockdev reset ...[2024-04-23 16:22:45.132895] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.229 [2024-04-23 16:22:45.132987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:23:46.487 [2024-04-23 16:22:45.244473] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:46.487 passed 00:23:46.487 Test: blockdev write read 8 blocks ...passed 00:23:46.487 Test: blockdev write read size > 128k ...passed 00:23:46.487 Test: blockdev write read invalid size ...passed 00:23:46.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:46.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:46.487 Test: blockdev write read max offset ...passed 00:23:46.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:46.487 Test: blockdev writev readv 8 blocks ...passed 00:23:46.487 Test: blockdev writev readv 30 x 1block ...passed 00:23:46.747 Test: blockdev writev readv block ...passed 00:23:46.747 Test: blockdev writev readv size > 128k ...passed 00:23:46.747 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:46.747 Test: blockdev comparev and writev ...[2024-04-23 16:22:45.467245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.467286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.467304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.467314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.467719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.467730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.467743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.467751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.468144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.468166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.468175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.468565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.468577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:46.747 [2024-04-23 16:22:45.468591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:46.747 [2024-04-23 16:22:45.468601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:46.747 passed 00:23:46.747 Test: blockdev nvme passthru rw ...passed 00:23:46.747 Test: blockdev nvme passthru vendor specific ...[2024-04-23 16:22:45.554409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.748 [2024-04-23 16:22:45.554434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:46.748 [2024-04-23 16:22:45.554742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.748 [2024-04-23 16:22:45.554752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:46.748 [2024-04-23 16:22:45.555041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.748 [2024-04-23 16:22:45.555050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:46.748 [2024-04-23 16:22:45.555326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.748 [2024-04-23 16:22:45.555336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:46.748 passed 00:23:46.748 Test: blockdev nvme admin passthru ...passed 00:23:46.748 Test: blockdev copy ...passed 00:23:46.748 00:23:46.748 Run Summary: Type Total Ran Passed Failed Inactive 00:23:46.748 suites 1 1 n/a 0 0 00:23:46.748 tests 23 23 23 0 0 00:23:46.748 asserts 152 152 152 0 n/a 00:23:46.748 00:23:46.748 Elapsed time = 1.455 seconds 00:23:47.319 16:22:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.319 16:22:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:47.319 16:22:45 -- common/autotest_common.sh@10 -- # set +x 00:23:47.319 16:22:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:47.319 16:22:46 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:47.319 16:22:46 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:47.319 16:22:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:47.319 16:22:46 -- nvmf/common.sh@116 -- # sync 00:23:47.319 16:22:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:47.319 16:22:46 -- nvmf/common.sh@119 -- # set +e 00:23:47.319 16:22:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:47.319 16:22:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:47.319 rmmod nvme_tcp 00:23:47.319 rmmod nvme_fabrics 00:23:47.319 rmmod nvme_keyring 00:23:47.319 16:22:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:47.319 16:22:46 -- nvmf/common.sh@123 -- # set -e 00:23:47.319 16:22:46 -- nvmf/common.sh@124 -- # return 0 00:23:47.319 16:22:46 -- nvmf/common.sh@477 -- # '[' -n 3160750 ']' 00:23:47.319 16:22:46 -- nvmf/common.sh@478 -- # killprocess 3160750 00:23:47.319 16:22:46 -- common/autotest_common.sh@926 -- # '[' -z 3160750 ']' 00:23:47.319 16:22:46 -- common/autotest_common.sh@930 -- # kill -0 3160750 00:23:47.319 16:22:46 -- common/autotest_common.sh@931 -- # uname 00:23:47.319 16:22:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:47.319 16:22:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3160750 00:23:47.319 16:22:46 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:23:47.319 16:22:46 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:23:47.319 16:22:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3160750' 00:23:47.319 killing process with pid 3160750 00:23:47.319 16:22:46 -- common/autotest_common.sh@945 -- # kill 3160750 00:23:47.319 16:22:46 -- common/autotest_common.sh@950 -- # wait 3160750 00:23:47.888 16:22:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:47.888 16:22:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:47.888 16:22:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:47.888 16:22:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.888 16:22:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:47.888 16:22:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.888 16:22:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.888 16:22:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.421 16:22:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:50.421 00:23:50.421 real 0m10.715s 00:23:50.421 user 0m15.730s 00:23:50.421 sys 0m4.553s 00:23:50.421 16:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.421 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:23:50.421 ************************************ 00:23:50.421 END TEST nvmf_bdevio 00:23:50.421 ************************************ 00:23:50.421 16:22:48 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:23:50.421 16:22:48 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:50.421 16:22:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:50.421 16:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:50.421 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:23:50.421 ************************************ 00:23:50.421 START TEST nvmf_bdevio_no_huge 00:23:50.421 ************************************ 00:23:50.421 16:22:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:50.421 * Looking for test storage... 
00:23:50.421 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:50.421 16:22:48 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.421 16:22:48 -- nvmf/common.sh@7 -- # uname -s 00:23:50.421 16:22:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.421 16:22:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.421 16:22:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.421 16:22:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.421 16:22:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.421 16:22:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.421 16:22:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.421 16:22:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.421 16:22:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.421 16:22:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.421 16:22:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:50.421 16:22:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:50.421 16:22:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.421 16:22:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.421 16:22:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:50.422 16:22:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:50.422 16:22:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.422 16:22:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.422 16:22:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.422 16:22:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.422 16:22:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.422 16:22:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.422 16:22:48 -- paths/export.sh@5 -- # export PATH 00:23:50.422 16:22:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.422 16:22:48 -- nvmf/common.sh@46 -- # : 0 00:23:50.422 16:22:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:50.422 16:22:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:50.422 16:22:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:50.422 16:22:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.422 16:22:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.422 16:22:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:50.422 16:22:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:50.422 16:22:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:50.422 16:22:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:50.422 16:22:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:50.422 16:22:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:23:50.422 16:22:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:50.422 16:22:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.422 16:22:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:50.422 16:22:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:50.422 16:22:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:50.422 16:22:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.422 16:22:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.422 16:22:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.422 16:22:48 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:23:50.422 16:22:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:50.422 16:22:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:50.422 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:23:55.700 16:22:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:55.700 16:22:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:55.700 16:22:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:55.700 16:22:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:55.700 16:22:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:55.700 16:22:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:55.700 16:22:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:55.700 16:22:53 -- nvmf/common.sh@294 -- # net_devs=() 00:23:55.700 16:22:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:55.700 16:22:53 -- 
nvmf/common.sh@295 -- # e810=() 00:23:55.700 16:22:53 -- nvmf/common.sh@295 -- # local -ga e810 00:23:55.700 16:22:53 -- nvmf/common.sh@296 -- # x722=() 00:23:55.700 16:22:53 -- nvmf/common.sh@296 -- # local -ga x722 00:23:55.700 16:22:53 -- nvmf/common.sh@297 -- # mlx=() 00:23:55.700 16:22:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:55.700 16:22:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.700 16:22:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:55.700 16:22:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:55.700 16:22:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.700 16:22:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:55.700 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:55.700 16:22:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:55.700 16:22:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:55.700 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:55.700 16:22:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.700 16:22:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.701 16:22:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:55.701 16:22:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:55.701 16:22:53 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:23:55.701 16:22:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:55.701 16:22:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.701 16:22:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.701 16:22:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.701 16:22:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:55.701 Found net devices under 0000:27:00.0: cvl_0_0 00:23:55.701 
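Annotation: the discovery loop above matches the two Intel 0x159b (ice/E810) functions at 0000:27:00.0 and 0000:27:00.1 and resolves each one to its kernel net device by globbing the sysfs net/ directory, which is where the cvl_0_0 and cvl_0_1 names come from. A hedged stand-alone equivalent of that lookup (the PCI address is taken from the log; the snippet itself is illustrative):

    pci=0000:27:00.0
    # the kernel lists the netdev(s) bound to a PCI function under sysfs
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 on this machine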
16:22:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.701 16:22:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:55.701 16:22:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.701 16:22:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:55.701 16:22:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.701 16:22:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:55.701 Found net devices under 0000:27:00.1: cvl_0_1 00:23:55.701 16:22:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.701 16:22:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:55.701 16:22:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:55.701 16:22:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:55.701 16:22:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:55.701 16:22:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:55.701 16:22:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.701 16:22:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.701 16:22:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.701 16:22:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:55.701 16:22:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.701 16:22:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.701 16:22:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:55.701 16:22:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.701 16:22:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.701 16:22:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:55.701 16:22:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:55.701 16:22:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.701 16:22:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.701 16:22:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.701 16:22:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.701 16:22:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:55.701 16:22:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.701 16:22:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.701 16:22:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.701 16:22:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:55.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:55.701 00:23:55.701 --- 10.0.0.2 ping statistics --- 00:23:55.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.701 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:55.701 16:22:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:23:55.701 00:23:55.701 --- 10.0.0.1 ping statistics --- 00:23:55.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.701 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:23:55.701 16:22:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.701 16:22:54 -- nvmf/common.sh@410 -- # return 0 00:23:55.701 16:22:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:55.701 16:22:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.701 16:22:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:55.701 16:22:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:55.701 16:22:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.701 16:22:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:55.701 16:22:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:55.701 16:22:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:55.701 16:22:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:55.701 16:22:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:55.701 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.701 16:22:54 -- nvmf/common.sh@469 -- # nvmfpid=3165282 00:23:55.701 16:22:54 -- nvmf/common.sh@470 -- # waitforlisten 3165282 00:23:55.701 16:22:54 -- common/autotest_common.sh@819 -- # '[' -z 3165282 ']' 00:23:55.701 16:22:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.701 16:22:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:55.701 16:22:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:55.701 16:22:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.701 16:22:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:55.701 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:23:55.701 [2024-04-23 16:22:54.310569] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:55.701 [2024-04-23 16:22:54.310689] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:55.701 [2024-04-23 16:22:54.457947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.701 [2024-04-23 16:22:54.579414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:55.701 [2024-04-23 16:22:54.579574] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.701 [2024-04-23 16:22:54.579587] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.701 [2024-04-23 16:22:54.579597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
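Annotation: unlike the earlier nvmf_bdevio run, this target is started with --no-huge and -s 1024, so the EAL parameter line above shows DPDK allocating 1024 MB of ordinary memory in --iova-mode=va instead of reserving hugepages. A sketch of the equivalent invocation, assuming the same namespace and build tree as the rest of this job:

    # no-hugepage variant: 1024 MB of regular memory, VA IOVA mode picked by EAL
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78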
00:23:55.701 [2024-04-23 16:22:54.579798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.701 [2024-04-23 16:22:54.579926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:55.701 [2024-04-23 16:22:54.580028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.701 [2024-04-23 16:22:54.580057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:56.271 16:22:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:56.271 16:22:55 -- common/autotest_common.sh@852 -- # return 0 00:23:56.271 16:22:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:56.271 16:22:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 16:22:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.271 16:22:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.271 16:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 [2024-04-23 16:22:55.039393] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.271 16:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.271 16:22:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:56.271 16:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 Malloc0 00:23:56.271 16:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.271 16:22:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.271 16:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 16:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.271 16:22:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:56.271 16:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 16:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.271 16:22:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.271 16:22:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:56.271 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:23:56.271 [2024-04-23 16:22:55.092941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.271 16:22:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:56.271 16:22:55 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:56.271 16:22:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:56.271 16:22:55 -- nvmf/common.sh@520 -- # config=() 00:23:56.271 16:22:55 -- nvmf/common.sh@520 -- # local subsystem config 00:23:56.271 16:22:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:56.271 16:22:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:56.271 { 00:23:56.271 "params": { 00:23:56.271 "name": "Nvme$subsystem", 00:23:56.271 "trtype": "$TEST_TRANSPORT", 00:23:56.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.271 "adrfam": "ipv4", 00:23:56.271 "trsvcid": 
"$NVMF_PORT", 00:23:56.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.271 "hdgst": ${hdgst:-false}, 00:23:56.271 "ddgst": ${ddgst:-false} 00:23:56.271 }, 00:23:56.271 "method": "bdev_nvme_attach_controller" 00:23:56.271 } 00:23:56.271 EOF 00:23:56.271 )") 00:23:56.271 16:22:55 -- nvmf/common.sh@542 -- # cat 00:23:56.271 16:22:55 -- nvmf/common.sh@544 -- # jq . 00:23:56.271 16:22:55 -- nvmf/common.sh@545 -- # IFS=, 00:23:56.271 16:22:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:56.271 "params": { 00:23:56.271 "name": "Nvme1", 00:23:56.271 "trtype": "tcp", 00:23:56.271 "traddr": "10.0.0.2", 00:23:56.271 "adrfam": "ipv4", 00:23:56.271 "trsvcid": "4420", 00:23:56.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.271 "hdgst": false, 00:23:56.271 "ddgst": false 00:23:56.271 }, 00:23:56.271 "method": "bdev_nvme_attach_controller" 00:23:56.271 }' 00:23:56.271 [2024-04-23 16:22:55.164133] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:23:56.271 [2024-04-23 16:22:55.164243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3165583 ] 00:23:56.533 [2024-04-23 16:22:55.301117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:56.533 [2024-04-23 16:22:55.419286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.533 [2024-04-23 16:22:55.419388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.533 [2024-04-23 16:22:55.419394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.793 [2024-04-23 16:22:55.675793] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:56.793 [2024-04-23 16:22:55.675841] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:56.793 I/O targets: 00:23:56.793 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:56.793 00:23:56.793 00:23:56.793 CUnit - A unit testing framework for C - Version 2.1-3 00:23:56.793 http://cunit.sourceforge.net/ 00:23:56.793 00:23:56.793 00:23:56.793 Suite: bdevio tests on: Nvme1n1 00:23:57.053 Test: blockdev write read block ...passed 00:23:57.053 Test: blockdev write zeroes read block ...passed 00:23:57.053 Test: blockdev write zeroes read no split ...passed 00:23:57.053 Test: blockdev write zeroes read split ...passed 00:23:57.053 Test: blockdev write zeroes read split partial ...passed 00:23:57.053 Test: blockdev reset ...[2024-04-23 16:22:55.901243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.053 [2024-04-23 16:22:55.901351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000002f80 (9): Bad file descriptor 00:23:57.053 [2024-04-23 16:22:55.964473] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:57.053 passed 00:23:57.312 Test: blockdev write read 8 blocks ...passed 00:23:57.312 Test: blockdev write read size > 128k ...passed 00:23:57.312 Test: blockdev write read invalid size ...passed 00:23:57.312 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:57.312 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:57.312 Test: blockdev write read max offset ...passed 00:23:57.312 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:57.312 Test: blockdev writev readv 8 blocks ...passed 00:23:57.312 Test: blockdev writev readv 30 x 1block ...passed 00:23:57.312 Test: blockdev writev readv block ...passed 00:23:57.312 Test: blockdev writev readv size > 128k ...passed 00:23:57.312 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:57.312 Test: blockdev comparev and writev ...[2024-04-23 16:22:56.188789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.188832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.188851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.188861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.189239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.189250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.189264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.189274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.189667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.189678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.189691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.189699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.190087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.190101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:57.312 [2024-04-23 16:22:56.190114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:57.312 [2024-04-23 16:22:56.190125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:57.312 passed 00:23:57.571 Test: blockdev nvme passthru rw ...passed 00:23:57.571 Test: blockdev nvme passthru vendor specific ...[2024-04-23 16:22:56.274400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.571 [2024-04-23 16:22:56.274424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:57.571 [2024-04-23 16:22:56.274714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.571 [2024-04-23 16:22:56.274730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:57.571 [2024-04-23 16:22:56.275005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.571 [2024-04-23 16:22:56.275019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:57.571 [2024-04-23 16:22:56.275288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:57.571 [2024-04-23 16:22:56.275300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:57.571 passed 00:23:57.571 Test: blockdev nvme admin passthru ...passed 00:23:57.571 Test: blockdev copy ...passed 00:23:57.571 00:23:57.572 Run Summary: Type Total Ran Passed Failed Inactive 00:23:57.572 suites 1 1 n/a 0 0 00:23:57.572 tests 23 23 23 0 0 00:23:57.572 asserts 152 152 152 0 n/a 00:23:57.572 00:23:57.572 Elapsed time = 1.319 seconds 00:23:57.831 16:22:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.831 16:22:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.831 16:22:56 -- common/autotest_common.sh@10 -- # set +x 00:23:57.831 16:22:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.831 16:22:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:57.831 16:22:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:23:57.831 16:22:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:57.831 16:22:56 -- nvmf/common.sh@116 -- # sync 00:23:57.831 16:22:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:57.831 16:22:56 -- nvmf/common.sh@119 -- # set +e 00:23:57.831 16:22:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:57.831 16:22:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:57.831 rmmod nvme_tcp 00:23:57.831 rmmod nvme_fabrics 00:23:57.831 rmmod nvme_keyring 00:23:57.831 16:22:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:57.831 16:22:56 -- nvmf/common.sh@123 -- # set -e 00:23:57.831 16:22:56 -- nvmf/common.sh@124 -- # return 0 00:23:57.831 16:22:56 -- nvmf/common.sh@477 -- # '[' -n 3165282 ']' 00:23:57.831 16:22:56 -- nvmf/common.sh@478 -- # killprocess 3165282 00:23:57.831 16:22:56 -- common/autotest_common.sh@926 -- # '[' -z 3165282 ']' 00:23:57.831 16:22:56 -- common/autotest_common.sh@930 -- # kill -0 3165282 00:23:57.831 16:22:56 -- common/autotest_common.sh@931 -- # uname 00:23:57.831 16:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:57.831 16:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3165282 00:23:58.089 16:22:56 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:23:58.089 16:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:23:58.089 16:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3165282' 00:23:58.089 killing process with pid 3165282 00:23:58.089 16:22:56 -- common/autotest_common.sh@945 -- # kill 3165282 00:23:58.089 16:22:56 -- common/autotest_common.sh@950 -- # wait 3165282 00:23:58.349 16:22:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:58.349 16:22:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:58.349 16:22:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:58.349 16:22:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.350 16:22:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:58.350 16:22:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.350 16:22:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.350 16:22:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.890 16:22:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:00.890 00:24:00.890 real 0m10.429s 00:24:00.890 user 0m14.185s 00:24:00.890 sys 0m4.887s 00:24:00.890 16:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.890 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 ************************************ 00:24:00.890 END TEST nvmf_bdevio_no_huge 00:24:00.890 ************************************ 00:24:00.890 16:22:59 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:00.890 16:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:00.890 16:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:00.890 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 ************************************ 00:24:00.890 START TEST nvmf_tls 00:24:00.890 ************************************ 00:24:00.890 16:22:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:00.890 * Looking for test storage... 
00:24:00.890 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:00.890 16:22:59 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.890 16:22:59 -- nvmf/common.sh@7 -- # uname -s 00:24:00.890 16:22:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.890 16:22:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.890 16:22:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.890 16:22:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.890 16:22:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.890 16:22:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.890 16:22:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.890 16:22:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.890 16:22:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.891 16:22:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.891 16:22:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:00.891 16:22:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:00.891 16:22:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.891 16:22:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.891 16:22:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:00.891 16:22:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:00.891 16:22:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.891 16:22:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.891 16:22:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.891 16:22:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.891 16:22:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.891 16:22:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.891 16:22:59 -- paths/export.sh@5 -- # export PATH 00:24:00.891 16:22:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.891 16:22:59 -- nvmf/common.sh@46 -- # : 0 00:24:00.891 16:22:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:00.891 16:22:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:00.891 16:22:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:00.891 16:22:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.891 16:22:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.891 16:22:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:00.891 16:22:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:00.891 16:22:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:00.891 16:22:59 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:00.891 16:22:59 -- target/tls.sh@71 -- # nvmftestinit 00:24:00.891 16:22:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:00.891 16:22:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.891 16:22:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:00.891 16:22:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:00.891 16:22:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:00.891 16:22:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.891 16:22:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.891 16:22:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.891 16:22:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:24:00.891 16:22:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:00.891 16:22:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:00.891 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.173 16:23:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.173 16:23:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:06.173 16:23:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:06.173 16:23:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:06.173 16:23:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:06.173 16:23:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:06.173 16:23:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:06.173 16:23:04 -- nvmf/common.sh@294 -- # net_devs=() 00:24:06.173 16:23:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:06.173 16:23:04 -- nvmf/common.sh@295 -- # e810=() 
00:24:06.173 16:23:04 -- nvmf/common.sh@295 -- # local -ga e810 00:24:06.173 16:23:04 -- nvmf/common.sh@296 -- # x722=() 00:24:06.173 16:23:04 -- nvmf/common.sh@296 -- # local -ga x722 00:24:06.173 16:23:04 -- nvmf/common.sh@297 -- # mlx=() 00:24:06.173 16:23:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:06.173 16:23:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.173 16:23:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:06.173 16:23:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.173 16:23:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:06.173 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:06.173 16:23:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.173 16:23:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:06.173 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:06.173 16:23:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.173 16:23:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.173 16:23:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.173 16:23:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:06.173 Found net devices under 0000:27:00.0: cvl_0_0 00:24:06.173 16:23:04 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:06.173 16:23:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.173 16:23:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.173 16:23:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.173 16:23:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:06.173 Found net devices under 0000:27:00.1: cvl_0_1 00:24:06.173 16:23:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.173 16:23:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:06.173 16:23:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:06.173 16:23:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.173 16:23:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.173 16:23:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.173 16:23:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:06.173 16:23:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.173 16:23:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.173 16:23:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:06.173 16:23:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.173 16:23:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.173 16:23:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:06.173 16:23:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:06.173 16:23:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.173 16:23:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.173 16:23:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.173 16:23:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.173 16:23:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:06.173 16:23:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.173 16:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.173 16:23:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.173 16:23:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:06.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:24:06.173 00:24:06.173 --- 10.0.0.2 ping statistics --- 00:24:06.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.173 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:06.173 16:23:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:24:06.173 00:24:06.173 --- 10.0.0.1 ping statistics --- 00:24:06.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.173 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:06.173 16:23:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.173 16:23:04 -- nvmf/common.sh@410 -- # return 0 00:24:06.173 16:23:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:06.173 16:23:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.173 16:23:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:06.173 16:23:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.173 16:23:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:06.173 16:23:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:06.173 16:23:04 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:06.173 16:23:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:06.173 16:23:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:06.173 16:23:04 -- common/autotest_common.sh@10 -- # set +x 00:24:06.173 16:23:04 -- nvmf/common.sh@469 -- # nvmfpid=3169783 00:24:06.173 16:23:04 -- nvmf/common.sh@470 -- # waitforlisten 3169783 00:24:06.173 16:23:04 -- common/autotest_common.sh@819 -- # '[' -z 3169783 ']' 00:24:06.173 16:23:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.173 16:23:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:06.173 16:23:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.173 16:23:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:06.173 16:23:04 -- common/autotest_common.sh@10 -- # set +x 00:24:06.173 16:23:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:06.173 [2024-04-23 16:23:04.785865] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:06.174 [2024-04-23 16:23:04.785973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.174 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.174 [2024-04-23 16:23:04.906046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.174 [2024-04-23 16:23:05.002747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:06.174 [2024-04-23 16:23:05.002913] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.174 [2024-04-23 16:23:05.002926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.174 [2024-04-23 16:23:05.002935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
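[editor's note] The nvmf_tcp_init block traced above builds a small two-namespace rig so the initiator and target talk over real TCP on the same host. A minimal sketch of those steps, mirroring the ip/iptables/ping commands in the trace (assumes root and that cvl_0_0/cvl_0_1 are the two ice ports discovered earlier):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                       # initiator -> target sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator sanity check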
00:24:06.174 [2024-04-23 16:23:05.002964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.743 16:23:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:06.743 16:23:05 -- common/autotest_common.sh@852 -- # return 0 00:24:06.743 16:23:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.743 16:23:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:06.743 16:23:05 -- common/autotest_common.sh@10 -- # set +x 00:24:06.743 16:23:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.743 16:23:05 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:24:06.743 16:23:05 -- target/tls.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:06.743 true 00:24:06.743 16:23:05 -- target/tls.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:06.743 16:23:05 -- target/tls.sh@82 -- # jq -r .tls_version 00:24:07.005 16:23:05 -- target/tls.sh@82 -- # version=0 00:24:07.005 16:23:05 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:24:07.005 16:23:05 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:07.005 16:23:05 -- target/tls.sh@90 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:07.005 16:23:05 -- target/tls.sh@90 -- # jq -r .tls_version 00:24:07.266 16:23:06 -- target/tls.sh@90 -- # version=13 00:24:07.266 16:23:06 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:24:07.266 16:23:06 -- target/tls.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:07.266 16:23:06 -- target/tls.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:07.266 16:23:06 -- target/tls.sh@98 -- # jq -r .tls_version 00:24:07.524 16:23:06 -- target/tls.sh@98 -- # version=7 00:24:07.524 16:23:06 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:24:07.524 16:23:06 -- target/tls.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:07.524 16:23:06 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:07.783 16:23:06 -- target/tls.sh@105 -- # ktls=false 00:24:07.783 16:23:06 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:24:07.783 16:23:06 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:07.783 16:23:06 -- target/tls.sh@113 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:07.783 16:23:06 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:08.042 16:23:06 -- target/tls.sh@113 -- # ktls=true 00:24:08.042 16:23:06 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:24:08.042 16:23:06 -- target/tls.sh@120 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:08.042 16:23:06 -- target/tls.sh@121 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:08.042 16:23:06 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:24:08.302 16:23:06 -- target/tls.sh@121 -- # ktls=false 00:24:08.302 16:23:06 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:24:08.302 16:23:06 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:24:08.302 16:23:06 -- target/tls.sh@49 -- # local 
key hash crc 00:24:08.302 16:23:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:24:08.302 16:23:06 -- target/tls.sh@51 -- # hash=01 00:24:08.302 16:23:06 -- target/tls.sh@52 -- # gzip -1 -c 00:24:08.302 16:23:06 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:24:08.302 16:23:06 -- target/tls.sh@52 -- # tail -c8 00:24:08.302 16:23:06 -- target/tls.sh@52 -- # head -c 4 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # crc='p$H�' 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:08.302 16:23:07 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:08.302 16:23:07 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:24:08.302 16:23:07 -- target/tls.sh@49 -- # local key hash crc 00:24:08.302 16:23:07 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:24:08.302 16:23:07 -- target/tls.sh@51 -- # hash=01 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # gzip -1 -c 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # tail -c8 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # head -c 4 00:24:08.302 16:23:07 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:08.302 16:23:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:08.302 16:23:07 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:08.302 16:23:07 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:08.302 16:23:07 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:08.302 16:23:07 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:08.302 16:23:07 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:08.302 16:23:07 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:08.302 16:23:07 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:08.302 16:23:07 -- target/tls.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:08.302 16:23:07 -- target/tls.sh@140 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:08.563 16:23:07 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:08.563 16:23:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:08.563 16:23:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.824 [2024-04-23 16:23:07.557425] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.824 16:23:07 -- target/tls.sh@61 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.824 16:23:07 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.084 [2024-04-23 16:23:07.849477] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.084 [2024-04-23 16:23:07.849747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.084 16:23:07 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:09.343 malloc0 00:24:09.343 16:23:08 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:09.343 16:23:08 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:09.602 16:23:08 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:09.602 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.639 Initializing NVMe Controllers 00:24:19.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.639 Initialization complete. Launching workers. 
00:24:19.639 ======================================================== 00:24:19.639 Latency(us) 00:24:19.639 Device Information : IOPS MiB/s Average min max 00:24:19.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17629.05 68.86 3630.65 1157.10 7191.36 00:24:19.639 ======================================================== 00:24:19.639 Total : 17629.05 68.86 3630.65 1157.10 7191.36 00:24:19.639 00:24:19.639 16:23:18 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:19.639 16:23:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:19.639 16:23:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.639 16:23:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.639 16:23:18 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:24:19.639 16:23:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.639 16:23:18 -- target/tls.sh@28 -- # bdevperf_pid=3172535 00:24:19.639 16:23:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.639 16:23:18 -- target/tls.sh@31 -- # waitforlisten 3172535 /var/tmp/bdevperf.sock 00:24:19.639 16:23:18 -- common/autotest_common.sh@819 -- # '[' -z 3172535 ']' 00:24:19.639 16:23:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.639 16:23:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.639 16:23:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.639 16:23:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.639 16:23:18 -- common/autotest_common.sh@10 -- # set +x 00:24:19.639 16:23:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.898 [2024-04-23 16:23:18.574790] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
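[editor's note] The format_interchange_psk trace earlier (target/tls.sh@49-@54) turns a plain hex string into the NVMeTLSkey-1 interchange form by appending the gzip-trailer CRC32 and base64-encoding the result. A minimal sketch of that derivation, assuming GNU gzip/base64/head/tail and piping to base64 instead of the /dev/fd trick used in the trace; the expected value is the one printed in the log:

  key=00112233445566778899aabbccddeeff      # used as ASCII characters, not decoded hex
  hash=01
  # gzip -1 trailer = 4-byte CRC32 + 4-byte size, so the last 8 bytes minus the size give the CRC
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  psk="NVMeTLSkey-1:${hash}:$(echo -n "$key$crc" | base64):"
  echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: per the trace above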
00:24:19.899 [2024-04-23 16:23:18.574910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172535 ] 00:24:19.899 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.899 [2024-04-23 16:23:18.688809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.899 [2024-04-23 16:23:18.783932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.468 16:23:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:20.468 16:23:19 -- common/autotest_common.sh@852 -- # return 0 00:24:20.468 16:23:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:20.468 [2024-04-23 16:23:19.383970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.730 TLSTESTn1 00:24:20.730 16:23:19 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.730 Running I/O for 10 seconds... 00:24:30.714 00:24:30.714 Latency(us) 00:24:30.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.714 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.714 Verification LBA range: start 0x0 length 0x2000 00:24:30.714 TLSTESTn1 : 10.02 3612.03 14.11 0.00 0.00 35399.55 4259.84 72296.56 00:24:30.714 =================================================================================================================== 00:24:30.714 Total : 3612.03 14.11 0.00 0.00 35399.55 4259.84 72296.56 00:24:30.714 0 00:24:30.714 16:23:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.714 16:23:29 -- target/tls.sh@45 -- # killprocess 3172535 00:24:30.714 16:23:29 -- common/autotest_common.sh@926 -- # '[' -z 3172535 ']' 00:24:30.714 16:23:29 -- common/autotest_common.sh@930 -- # kill -0 3172535 00:24:30.714 16:23:29 -- common/autotest_common.sh@931 -- # uname 00:24:30.714 16:23:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.714 16:23:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3172535 00:24:30.714 16:23:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:30.714 16:23:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:30.714 16:23:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3172535' 00:24:30.714 killing process with pid 3172535 00:24:30.714 16:23:29 -- common/autotest_common.sh@945 -- # kill 3172535 00:24:30.714 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.714 00:24:30.714 Latency(us) 00:24:30.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.714 =================================================================================================================== 00:24:30.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.714 16:23:29 -- common/autotest_common.sh@950 -- # wait 3172535 00:24:31.283 16:23:30 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:31.283 16:23:30 -- common/autotest_common.sh@640 -- # local es=0 00:24:31.283 16:23:30 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:31.283 16:23:30 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:24:31.283 16:23:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:31.283 16:23:30 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:24:31.283 16:23:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:31.283 16:23:30 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:31.283 16:23:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.283 16:23:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.283 16:23:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.283 16:23:30 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:24:31.283 16:23:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.283 16:23:30 -- target/tls.sh@28 -- # bdevperf_pid=3174918 00:24:31.283 16:23:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.283 16:23:30 -- target/tls.sh@31 -- # waitforlisten 3174918 /var/tmp/bdevperf.sock 00:24:31.283 16:23:30 -- common/autotest_common.sh@819 -- # '[' -z 3174918 ']' 00:24:31.283 16:23:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.283 16:23:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.283 16:23:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.283 16:23:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.283 16:23:30 -- common/autotest_common.sh@10 -- # set +x 00:24:31.283 16:23:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.283 [2024-04-23 16:23:30.092467] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
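[editor's note] Every run_bdevperf case in this section follows the same remote-RPC pattern: start bdevperf idle on its own Unix socket, attach a TLS NVMe/TCP controller through that socket, then drive the I/O via bdevperf.py. A condensed sketch using the same flags and paths as the trace; the socket-wait loop is a simplified stand-in for the harness's waitforlisten helper:

  SPDK_ROOT=/var/jenkins/workspace/dsa-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # 1. Start bdevperf idle (-z) with a private RPC socket.
  "$SPDK_ROOT/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
  until [ -S "$SOCK" ]; do sleep 0.1; done

  # 2. Attach a TLS-enabled NVMe/TCP controller over that socket.
  "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$SPDK_ROOT/test/nvmf/target/key1.txt"

  # 3. Run the actual I/O test against the attached bdev.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests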
00:24:31.283 [2024-04-23 16:23:30.092587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174918 ] 00:24:31.283 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.283 [2024-04-23 16:23:30.207143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.541 [2024-04-23 16:23:30.302314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.111 16:23:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:32.111 16:23:30 -- common/autotest_common.sh@852 -- # return 0 00:24:32.111 16:23:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt 00:24:32.111 [2024-04-23 16:23:30.902130] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.112 [2024-04-23 16:23:30.909844] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.112 [2024-04-23 16:23:30.910029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:24:32.112 [2024-04-23 16:23:30.911008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:24:32.112 [2024-04-23 16:23:30.912002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.112 [2024-04-23 16:23:30.912020] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:32.112 [2024-04-23 16:23:30.912034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.112 request: 00:24:32.112 { 00:24:32.112 "name": "TLSTEST", 00:24:32.112 "trtype": "tcp", 00:24:32.112 "traddr": "10.0.0.2", 00:24:32.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.112 "adrfam": "ipv4", 00:24:32.112 "trsvcid": "4420", 00:24:32.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.112 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:24:32.112 "method": "bdev_nvme_attach_controller", 00:24:32.112 "req_id": 1 00:24:32.112 } 00:24:32.112 Got JSON-RPC error response 00:24:32.112 response: 00:24:32.112 { 00:24:32.112 "code": -32602, 00:24:32.112 "message": "Invalid parameters" 00:24:32.112 } 00:24:32.112 16:23:30 -- target/tls.sh@36 -- # killprocess 3174918 00:24:32.112 16:23:30 -- common/autotest_common.sh@926 -- # '[' -z 3174918 ']' 00:24:32.112 16:23:30 -- common/autotest_common.sh@930 -- # kill -0 3174918 00:24:32.112 16:23:30 -- common/autotest_common.sh@931 -- # uname 00:24:32.112 16:23:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:32.112 16:23:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3174918 00:24:32.112 16:23:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:32.112 16:23:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:32.112 16:23:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3174918' 00:24:32.112 killing process with pid 3174918 00:24:32.112 16:23:30 -- common/autotest_common.sh@945 -- # kill 3174918 00:24:32.112 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.112 00:24:32.112 Latency(us) 00:24:32.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.112 =================================================================================================================== 00:24:32.112 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.112 16:23:30 -- common/autotest_common.sh@950 -- # wait 3174918 00:24:32.683 16:23:31 -- target/tls.sh@37 -- # return 1 00:24:32.683 16:23:31 -- common/autotest_common.sh@643 -- # es=1 00:24:32.683 16:23:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:32.683 16:23:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:32.683 16:23:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:32.683 16:23:31 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:32.683 16:23:31 -- common/autotest_common.sh@640 -- # local es=0 00:24:32.683 16:23:31 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:32.683 16:23:31 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:24:32.683 16:23:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:32.683 16:23:31 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:24:32.683 16:23:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:32.683 16:23:31 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:32.683 16:23:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.683 16:23:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:32.683 16:23:31 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:24:32.683 16:23:31 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:24:32.683 16:23:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.683 16:23:31 -- target/tls.sh@28 -- # bdevperf_pid=3175104 00:24:32.683 16:23:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.683 16:23:31 -- target/tls.sh@31 -- # waitforlisten 3175104 /var/tmp/bdevperf.sock 00:24:32.683 16:23:31 -- common/autotest_common.sh@819 -- # '[' -z 3175104 ']' 00:24:32.683 16:23:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.683 16:23:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:32.683 16:23:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.683 16:23:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:32.683 16:23:31 -- common/autotest_common.sh@10 -- # set +x 00:24:32.683 16:23:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.683 [2024-04-23 16:23:31.439974] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:32.683 [2024-04-23 16:23:31.440120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175104 ] 00:24:32.683 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.683 [2024-04-23 16:23:31.574586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.943 [2024-04-23 16:23:31.669531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.202 16:23:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:33.202 16:23:32 -- common/autotest_common.sh@852 -- # return 0 00:24:33.202 16:23:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:33.461 [2024-04-23 16:23:32.225474] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.461 [2024-04-23 16:23:32.238492] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:33.461 [2024-04-23 16:23:32.238524] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:33.461 [2024-04-23 16:23:32.238561] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.461 [2024-04-23 16:23:32.239297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:24:33.461 [2024-04-23 16:23:32.240278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x613000003300 (9): Bad file descriptor 00:24:33.461 [2024-04-23 16:23:32.241272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.461 [2024-04-23 16:23:32.241289] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:33.461 [2024-04-23 16:23:32.241301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.461 request: 00:24:33.461 { 00:24:33.461 "name": "TLSTEST", 00:24:33.461 "trtype": "tcp", 00:24:33.461 "traddr": "10.0.0.2", 00:24:33.461 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:33.461 "adrfam": "ipv4", 00:24:33.461 "trsvcid": "4420", 00:24:33.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.461 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:24:33.461 "method": "bdev_nvme_attach_controller", 00:24:33.461 "req_id": 1 00:24:33.461 } 00:24:33.461 Got JSON-RPC error response 00:24:33.461 response: 00:24:33.461 { 00:24:33.461 "code": -32602, 00:24:33.461 "message": "Invalid parameters" 00:24:33.461 } 00:24:33.461 16:23:32 -- target/tls.sh@36 -- # killprocess 3175104 00:24:33.461 16:23:32 -- common/autotest_common.sh@926 -- # '[' -z 3175104 ']' 00:24:33.461 16:23:32 -- common/autotest_common.sh@930 -- # kill -0 3175104 00:24:33.461 16:23:32 -- common/autotest_common.sh@931 -- # uname 00:24:33.461 16:23:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:33.461 16:23:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3175104 00:24:33.461 16:23:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:33.461 16:23:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:33.461 16:23:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3175104' 00:24:33.461 killing process with pid 3175104 00:24:33.461 16:23:32 -- common/autotest_common.sh@945 -- # kill 3175104 00:24:33.461 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.461 00:24:33.461 Latency(us) 00:24:33.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.461 =================================================================================================================== 00:24:33.461 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.461 16:23:32 -- common/autotest_common.sh@950 -- # wait 3175104 00:24:33.720 16:23:32 -- target/tls.sh@37 -- # return 1 00:24:33.720 16:23:32 -- common/autotest_common.sh@643 -- # es=1 00:24:33.720 16:23:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:33.720 16:23:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:33.720 16:23:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:33.979 16:23:32 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:33.979 16:23:32 -- common/autotest_common.sh@640 -- # local es=0 00:24:33.979 16:23:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:33.979 16:23:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:24:33.979 16:23:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:33.979 16:23:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:24:33.979 16:23:32 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:33.979 16:23:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:33.979 16:23:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.979 16:23:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:33.979 16:23:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.979 16:23:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:24:33.979 16:23:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.979 16:23:32 -- target/tls.sh@28 -- # bdevperf_pid=3175266 00:24:33.979 16:23:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.979 16:23:32 -- target/tls.sh@31 -- # waitforlisten 3175266 /var/tmp/bdevperf.sock 00:24:33.979 16:23:32 -- common/autotest_common.sh@819 -- # '[' -z 3175266 ']' 00:24:33.980 16:23:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.980 16:23:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:33.980 16:23:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.980 16:23:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:33.980 16:23:32 -- common/autotest_common.sh@10 -- # set +x 00:24:33.980 16:23:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.980 [2024-04-23 16:23:32.712273] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
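[editor's note] The failures above and below (wrong hostnqn, wrong subnqn) show the target resolving the PSK by the TLS identity "NVMe0R01 <hostnqn> <subnqn>", so a key file is only honored for the exact host/subsystem pair registered with nvmf_subsystem_add_host. A sketch of registering more than one allowed identity; the host2-with-key2 pairing is illustrative only and is not part of this test run:

  RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  KEYS=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target

  # host1 gets key1, as in the setup traced earlier
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEYS/key1.txt"
  # hypothetical second initiator identity with its own PSK
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk "$KEYS/key2.txt"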
00:24:33.980 [2024-04-23 16:23:32.712351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175266 ] 00:24:33.980 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.980 [2024-04-23 16:23:32.803319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.980 [2024-04-23 16:23:32.897957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.550 16:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:34.550 16:23:33 -- common/autotest_common.sh@852 -- # return 0 00:24:34.550 16:23:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt 00:24:34.812 [2024-04-23 16:23:33.564813] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.812 [2024-04-23 16:23:33.572644] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:34.812 [2024-04-23 16:23:33.572675] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:34.812 [2024-04-23 16:23:33.572715] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:34.812 [2024-04-23 16:23:33.573000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (107): Transport endpoint is not connected 00:24:34.812 [2024-04-23 16:23:33.573979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:24:34.812 [2024-04-23 16:23:33.574973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:34.812 [2024-04-23 16:23:33.574992] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:34.812 [2024-04-23 16:23:33.575007] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:34.812 request: 00:24:34.812 { 00:24:34.812 "name": "TLSTEST", 00:24:34.812 "trtype": "tcp", 00:24:34.812 "traddr": "10.0.0.2", 00:24:34.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.812 "adrfam": "ipv4", 00:24:34.812 "trsvcid": "4420", 00:24:34.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:34.812 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:24:34.812 "method": "bdev_nvme_attach_controller", 00:24:34.812 "req_id": 1 00:24:34.812 } 00:24:34.812 Got JSON-RPC error response 00:24:34.812 response: 00:24:34.812 { 00:24:34.812 "code": -32602, 00:24:34.812 "message": "Invalid parameters" 00:24:34.812 } 00:24:34.812 16:23:33 -- target/tls.sh@36 -- # killprocess 3175266 00:24:34.812 16:23:33 -- common/autotest_common.sh@926 -- # '[' -z 3175266 ']' 00:24:34.812 16:23:33 -- common/autotest_common.sh@930 -- # kill -0 3175266 00:24:34.812 16:23:33 -- common/autotest_common.sh@931 -- # uname 00:24:34.812 16:23:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:34.812 16:23:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3175266 00:24:34.812 16:23:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:34.812 16:23:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:34.812 16:23:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3175266' 00:24:34.812 killing process with pid 3175266 00:24:34.812 16:23:33 -- common/autotest_common.sh@945 -- # kill 3175266 00:24:34.812 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.812 00:24:34.812 Latency(us) 00:24:34.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.812 =================================================================================================================== 00:24:34.812 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:34.812 16:23:33 -- common/autotest_common.sh@950 -- # wait 3175266 00:24:35.379 16:23:34 -- target/tls.sh@37 -- # return 1 00:24:35.379 16:23:34 -- common/autotest_common.sh@643 -- # es=1 00:24:35.379 16:23:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:35.379 16:23:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:35.379 16:23:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:35.379 16:23:34 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:35.379 16:23:34 -- common/autotest_common.sh@640 -- # local es=0 00:24:35.379 16:23:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:35.379 16:23:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:24:35.379 16:23:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:35.379 16:23:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:24:35.379 16:23:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:35.379 16:23:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:35.379 16:23:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:35.379 16:23:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:35.379 16:23:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:35.379 16:23:34 -- target/tls.sh@23 -- # psk= 00:24:35.379 16:23:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.379 16:23:34 -- target/tls.sh@28 -- # 
bdevperf_pid=3175565 00:24:35.379 16:23:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.379 16:23:34 -- target/tls.sh@31 -- # waitforlisten 3175565 /var/tmp/bdevperf.sock 00:24:35.379 16:23:34 -- common/autotest_common.sh@819 -- # '[' -z 3175565 ']' 00:24:35.379 16:23:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.380 16:23:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:35.380 16:23:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.380 16:23:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:35.380 16:23:34 -- common/autotest_common.sh@10 -- # set +x 00:24:35.380 16:23:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:35.380 [2024-04-23 16:23:34.088685] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:35.380 [2024-04-23 16:23:34.088801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175565 ] 00:24:35.380 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.380 [2024-04-23 16:23:34.205318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.380 [2024-04-23 16:23:34.304124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.948 16:23:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:35.948 16:23:34 -- common/autotest_common.sh@852 -- # return 0 00:24:35.948 16:23:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:36.210 [2024-04-23 16:23:34.908933] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:36.210 [2024-04-23 16:23:34.911217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003140 (9): Bad file descriptor 00:24:36.210 [2024-04-23 16:23:34.912210] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.210 [2024-04-23 16:23:34.912228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:36.210 [2024-04-23 16:23:34.912245] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:36.210 request: 00:24:36.210 { 00:24:36.210 "name": "TLSTEST", 00:24:36.210 "trtype": "tcp", 00:24:36.210 "traddr": "10.0.0.2", 00:24:36.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.210 "adrfam": "ipv4", 00:24:36.210 "trsvcid": "4420", 00:24:36.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.210 "method": "bdev_nvme_attach_controller", 00:24:36.210 "req_id": 1 00:24:36.210 } 00:24:36.210 Got JSON-RPC error response 00:24:36.210 response: 00:24:36.210 { 00:24:36.210 "code": -32602, 00:24:36.210 "message": "Invalid parameters" 00:24:36.210 } 00:24:36.210 16:23:34 -- target/tls.sh@36 -- # killprocess 3175565 00:24:36.210 16:23:34 -- common/autotest_common.sh@926 -- # '[' -z 3175565 ']' 00:24:36.210 16:23:34 -- common/autotest_common.sh@930 -- # kill -0 3175565 00:24:36.210 16:23:34 -- common/autotest_common.sh@931 -- # uname 00:24:36.210 16:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:36.210 16:23:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3175565 00:24:36.210 16:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:36.210 16:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:36.210 16:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3175565' 00:24:36.210 killing process with pid 3175565 00:24:36.210 16:23:34 -- common/autotest_common.sh@945 -- # kill 3175565 00:24:36.210 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.210 00:24:36.210 Latency(us) 00:24:36.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.210 =================================================================================================================== 00:24:36.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:36.210 16:23:34 -- common/autotest_common.sh@950 -- # wait 3175565 00:24:36.472 16:23:35 -- target/tls.sh@37 -- # return 1 00:24:36.472 16:23:35 -- common/autotest_common.sh@643 -- # es=1 00:24:36.472 16:23:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:36.472 16:23:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:36.472 16:23:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:36.472 16:23:35 -- target/tls.sh@167 -- # killprocess 3169783 00:24:36.472 16:23:35 -- common/autotest_common.sh@926 -- # '[' -z 3169783 ']' 00:24:36.472 16:23:35 -- common/autotest_common.sh@930 -- # kill -0 3169783 00:24:36.472 16:23:35 -- common/autotest_common.sh@931 -- # uname 00:24:36.472 16:23:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:36.472 16:23:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3169783 00:24:36.472 16:23:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:36.472 16:23:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:36.472 16:23:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3169783' 00:24:36.472 killing process with pid 3169783 00:24:36.472 16:23:35 -- common/autotest_common.sh@945 -- # kill 3169783 00:24:36.472 16:23:35 -- common/autotest_common.sh@950 -- # wait 3169783 00:24:37.043 16:23:35 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:24:37.043 16:23:35 -- target/tls.sh@49 -- # local key hash crc 00:24:37.043 16:23:35 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:37.043 16:23:35 -- target/tls.sh@51 -- # hash=02 00:24:37.043 16:23:35 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:24:37.043 16:23:35 -- target/tls.sh@52 -- # gzip -1 -c 00:24:37.043 16:23:35 -- target/tls.sh@52 -- # tail -c8 00:24:37.043 16:23:35 -- target/tls.sh@52 -- # head -c 4 00:24:37.043 16:23:35 -- target/tls.sh@52 -- # crc='�e�'\''' 00:24:37.043 16:23:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:24:37.043 16:23:35 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:24:37.043 16:23:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:37.043 16:23:35 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:37.043 16:23:35 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:37.043 16:23:35 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:37.043 16:23:35 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:37.043 16:23:35 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:24:37.043 16:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:37.043 16:23:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:37.043 16:23:35 -- common/autotest_common.sh@10 -- # set +x 00:24:37.043 16:23:35 -- nvmf/common.sh@469 -- # nvmfpid=3176062 00:24:37.043 16:23:35 -- nvmf/common.sh@470 -- # waitforlisten 3176062 00:24:37.043 16:23:35 -- common/autotest_common.sh@819 -- # '[' -z 3176062 ']' 00:24:37.043 16:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.043 16:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:37.043 16:23:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:37.043 16:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.043 16:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:37.043 16:23:35 -- common/autotest_common.sh@10 -- # set +x 00:24:37.302 [2024-04-23 16:23:36.046379] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:37.302 [2024-04-23 16:23:36.046491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.302 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.303 [2024-04-23 16:23:36.166115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.561 [2024-04-23 16:23:36.261302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:37.561 [2024-04-23 16:23:36.261464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.561 [2024-04-23 16:23:36.261478] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.561 [2024-04-23 16:23:36.261487] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
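[editor's note] The key_long derivation above uses the same gzip-CRC/base64 recipe as before, only with a 48-hex-character key and hash id 02, and the resulting file is written with 0600 permissions (the final case in this run shows the target rejecting a world-readable PSK file). A minimal sketch, with the expected value taken from the trace:

  key_long=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key_long" | gzip -1 -c | tail -c8 | head -c4)
  psk="NVMeTLSkey-1:02:$(echo -n "$key_long$crc" | base64):"
  echo "$psk"   # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: per the trace
  echo -n "$psk" > key_long.txt
  chmod 0600 key_long.txt       # PSK files are expected to be owner-readable only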
00:24:37.561 [2024-04-23 16:23:36.261509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.820 16:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:37.820 16:23:36 -- common/autotest_common.sh@852 -- # return 0 00:24:37.820 16:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:37.820 16:23:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:37.820 16:23:36 -- common/autotest_common.sh@10 -- # set +x 00:24:38.082 16:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.082 16:23:36 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:38.082 16:23:36 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:38.082 16:23:36 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.082 [2024-04-23 16:23:36.878527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.082 16:23:36 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:38.343 16:23:37 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.343 [2024-04-23 16:23:37.166651] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.343 [2024-04-23 16:23:37.166893] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.343 16:23:37 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:38.602 malloc0 00:24:38.602 16:23:37 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:38.602 16:23:37 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:38.862 16:23:37 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:38.862 16:23:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:38.862 16:23:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:38.862 16:23:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:38.862 16:23:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:24:38.862 16:23:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.862 16:23:37 -- target/tls.sh@28 -- # bdevperf_pid=3176489 00:24:38.862 16:23:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:38.862 16:23:37 -- target/tls.sh@31 -- # waitforlisten 3176489 /var/tmp/bdevperf.sock 00:24:38.862 16:23:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:38.862 16:23:37 -- common/autotest_common.sh@819 -- # '[' -z 3176489 ']' 00:24:38.862 16:23:37 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.862 16:23:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:38.862 16:23:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.862 16:23:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:38.862 16:23:37 -- common/autotest_common.sh@10 -- # set +x 00:24:38.862 [2024-04-23 16:23:37.716117] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:38.862 [2024-04-23 16:23:37.716234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176489 ] 00:24:38.862 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.124 [2024-04-23 16:23:37.836049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.124 [2024-04-23 16:23:37.931209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.692 16:23:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:39.692 16:23:38 -- common/autotest_common.sh@852 -- # return 0 00:24:39.692 16:23:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:39.692 [2024-04-23 16:23:38.527354] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.692 TLSTESTn1 00:24:39.692 16:23:38 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:39.950 Running I/O for 10 seconds... 
00:24:49.944 00:24:49.944 Latency(us) 00:24:49.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.944 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:49.944 Verification LBA range: start 0x0 length 0x2000 00:24:49.944 TLSTESTn1 : 10.02 3457.96 13.51 0.00 0.00 36977.73 7415.92 65398.03 00:24:49.944 =================================================================================================================== 00:24:49.944 Total : 3457.96 13.51 0.00 0.00 36977.73 7415.92 65398.03 00:24:49.944 0 00:24:49.944 16:23:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.944 16:23:48 -- target/tls.sh@45 -- # killprocess 3176489 00:24:49.944 16:23:48 -- common/autotest_common.sh@926 -- # '[' -z 3176489 ']' 00:24:49.944 16:23:48 -- common/autotest_common.sh@930 -- # kill -0 3176489 00:24:49.944 16:23:48 -- common/autotest_common.sh@931 -- # uname 00:24:49.944 16:23:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:49.944 16:23:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3176489 00:24:49.944 16:23:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:49.944 16:23:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:49.944 16:23:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3176489' 00:24:49.944 killing process with pid 3176489 00:24:49.944 16:23:48 -- common/autotest_common.sh@945 -- # kill 3176489 00:24:49.944 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.944 00:24:49.944 Latency(us) 00:24:49.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.944 =================================================================================================================== 00:24:49.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.944 16:23:48 -- common/autotest_common.sh@950 -- # wait 3176489 00:24:50.205 16:23:49 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:50.467 16:23:49 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:50.467 16:23:49 -- common/autotest_common.sh@640 -- # local es=0 00:24:50.467 16:23:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:50.467 16:23:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:24:50.467 16:23:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:50.467 16:23:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:24:50.467 16:23:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:50.467 16:23:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:50.467 16:23:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:50.467 16:23:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:50.467 16:23:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:50.467 16:23:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:24:50.467 16:23:49 -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.467 16:23:49 -- target/tls.sh@28 -- # bdevperf_pid=3178594 00:24:50.467 16:23:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.467 16:23:49 -- target/tls.sh@31 -- # waitforlisten 3178594 /var/tmp/bdevperf.sock 00:24:50.467 16:23:49 -- common/autotest_common.sh@819 -- # '[' -z 3178594 ']' 00:24:50.467 16:23:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.467 16:23:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:50.467 16:23:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.467 16:23:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:50.467 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 16:23:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.467 [2024-04-23 16:23:49.228554] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:50.467 [2024-04-23 16:23:49.228733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178594 ] 00:24:50.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.467 [2024-04-23 16:23:49.364211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.729 [2024-04-23 16:23:49.460931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.303 16:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:51.303 16:23:49 -- common/autotest_common.sh@852 -- # return 0 00:24:51.303 16:23:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:51.303 [2024-04-23 16:23:50.079481] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.303 [2024-04-23 16:23:50.079543] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:51.303 request: 00:24:51.303 { 00:24:51.303 "name": "TLSTEST", 00:24:51.303 "trtype": "tcp", 00:24:51.303 "traddr": "10.0.0.2", 00:24:51.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.303 "adrfam": "ipv4", 00:24:51.303 "trsvcid": "4420", 00:24:51.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.303 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:24:51.303 "method": "bdev_nvme_attach_controller", 00:24:51.303 "req_id": 1 00:24:51.303 } 00:24:51.303 Got JSON-RPC error response 00:24:51.303 response: 00:24:51.303 { 00:24:51.303 "code": -22, 00:24:51.303 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:24:51.303 } 00:24:51.303 16:23:50 -- target/tls.sh@36 -- # killprocess 3178594 00:24:51.303 16:23:50 -- common/autotest_common.sh@926 -- # '[' -z 3178594 ']' 00:24:51.303 16:23:50 -- common/autotest_common.sh@930 -- # kill -0 3178594 00:24:51.303 
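The error above is the expected result of the initiator-side permission check: tls.sh deliberately loosens the key file to mode 0666, after which bdev_nvme_attach_controller refuses to load it. A minimal reproduction of that negative test, using the same paths as this run, would look roughly like:

    chmod 0666 test/nvmf/target/key_long.txt        # deliberately too permissive
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key_long.txt
    # expected: JSON-RPC error -22, "Could not retrieve PSK from file: ...key_long.txt"

The NOT helper in the test treats this failure as the expected outcome, so the bdevperf instance is simply torn down afterwards.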
16:23:50 -- common/autotest_common.sh@931 -- # uname 00:24:51.303 16:23:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:51.303 16:23:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3178594 00:24:51.303 16:23:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:51.303 16:23:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:51.303 16:23:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3178594' 00:24:51.303 killing process with pid 3178594 00:24:51.303 16:23:50 -- common/autotest_common.sh@945 -- # kill 3178594 00:24:51.303 Received shutdown signal, test time was about 10.000000 seconds 00:24:51.303 00:24:51.303 Latency(us) 00:24:51.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.303 =================================================================================================================== 00:24:51.303 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:51.303 16:23:50 -- common/autotest_common.sh@950 -- # wait 3178594 00:24:51.873 16:23:50 -- target/tls.sh@37 -- # return 1 00:24:51.873 16:23:50 -- common/autotest_common.sh@643 -- # es=1 00:24:51.873 16:23:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:51.873 16:23:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:51.873 16:23:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:51.873 16:23:50 -- target/tls.sh@183 -- # killprocess 3176062 00:24:51.873 16:23:50 -- common/autotest_common.sh@926 -- # '[' -z 3176062 ']' 00:24:51.873 16:23:50 -- common/autotest_common.sh@930 -- # kill -0 3176062 00:24:51.873 16:23:50 -- common/autotest_common.sh@931 -- # uname 00:24:51.873 16:23:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:51.873 16:23:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3176062 00:24:51.873 16:23:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:51.873 16:23:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:51.873 16:23:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3176062' 00:24:51.873 killing process with pid 3176062 00:24:51.873 16:23:50 -- common/autotest_common.sh@945 -- # kill 3176062 00:24:51.873 16:23:50 -- common/autotest_common.sh@950 -- # wait 3176062 00:24:52.132 16:23:51 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:52.132 16:23:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:52.132 16:23:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:52.132 16:23:51 -- common/autotest_common.sh@10 -- # set +x 00:24:52.132 16:23:51 -- nvmf/common.sh@469 -- # nvmfpid=3179057 00:24:52.132 16:23:51 -- nvmf/common.sh@470 -- # waitforlisten 3179057 00:24:52.132 16:23:51 -- common/autotest_common.sh@819 -- # '[' -z 3179057 ']' 00:24:52.132 16:23:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.132 16:23:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:52.132 16:23:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:52.132 16:23:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:52.132 16:23:51 -- common/autotest_common.sh@10 -- # set +x 00:24:52.132 16:23:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.392 [2024-04-23 16:23:51.124016] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:52.392 [2024-04-23 16:23:51.124130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.392 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.393 [2024-04-23 16:23:51.258635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.654 [2024-04-23 16:23:51.354489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:52.654 [2024-04-23 16:23:51.354696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.654 [2024-04-23 16:23:51.354711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.654 [2024-04-23 16:23:51.354721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.654 [2024-04-23 16:23:51.354760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.916 16:23:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:52.916 16:23:51 -- common/autotest_common.sh@852 -- # return 0 00:24:52.916 16:23:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:52.916 16:23:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:52.916 16:23:51 -- common/autotest_common.sh@10 -- # set +x 00:24:53.176 16:23:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.176 16:23:51 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:53.176 16:23:51 -- common/autotest_common.sh@640 -- # local es=0 00:24:53.176 16:23:51 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:53.176 16:23:51 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:24:53.176 16:23:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.176 16:23:51 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:24:53.176 16:23:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:53.176 16:23:51 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:53.176 16:23:51 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:53.176 16:23:51 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:53.176 [2024-04-23 16:23:52.005718] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.176 16:23:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:53.434 16:23:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:53.434 [2024-04-23 16:23:52.281761] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.434 [2024-04-23 16:23:52.281972] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.434 16:23:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:53.693 malloc0 00:24:53.693 16:23:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:53.693 16:23:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:53.952 [2024-04-23 16:23:52.718610] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:53.952 [2024-04-23 16:23:52.718655] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:53.952 [2024-04-23 16:23:52.718677] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:24:53.952 request: 00:24:53.952 { 00:24:53.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.952 "host": "nqn.2016-06.io.spdk:host1", 00:24:53.952 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:24:53.952 "method": "nvmf_subsystem_add_host", 00:24:53.952 "req_id": 1 00:24:53.952 } 00:24:53.952 Got JSON-RPC error response 00:24:53.952 response: 00:24:53.952 { 00:24:53.952 "code": -32603, 00:24:53.952 "message": "Internal error" 00:24:53.952 } 00:24:53.952 16:23:52 -- common/autotest_common.sh@643 -- # es=1 00:24:53.952 16:23:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:53.952 16:23:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:53.952 16:23:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:53.952 16:23:52 -- target/tls.sh@189 -- # killprocess 3179057 00:24:53.952 16:23:52 -- common/autotest_common.sh@926 -- # '[' -z 3179057 ']' 00:24:53.952 16:23:52 -- common/autotest_common.sh@930 -- # kill -0 3179057 00:24:53.952 16:23:52 -- common/autotest_common.sh@931 -- # uname 00:24:53.952 16:23:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:53.952 16:23:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3179057 00:24:53.952 16:23:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:53.952 16:23:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:53.952 16:23:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3179057' 00:24:53.952 killing process with pid 3179057 00:24:53.952 16:23:52 -- common/autotest_common.sh@945 -- # kill 3179057 00:24:53.952 16:23:52 -- common/autotest_common.sh@950 -- # wait 3179057 00:24:54.524 16:23:53 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:54.524 16:23:53 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:24:54.524 16:23:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:54.524 16:23:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.524 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:24:54.524 16:23:53 -- nvmf/common.sh@469 -- # nvmfpid=3179534 00:24:54.524 16:23:53 -- nvmf/common.sh@470 -- # 
waitforlisten 3179534 00:24:54.524 16:23:53 -- common/autotest_common.sh@819 -- # '[' -z 3179534 ']' 00:24:54.524 16:23:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.524 16:23:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:54.524 16:23:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.524 16:23:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:54.524 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:24:54.524 16:23:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.524 [2024-04-23 16:23:53.368217] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:54.524 [2024-04-23 16:23:53.368359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.785 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.785 [2024-04-23 16:23:53.511603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.785 [2024-04-23 16:23:53.607429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:54.785 [2024-04-23 16:23:53.607647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.785 [2024-04-23 16:23:53.607663] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.785 [2024-04-23 16:23:53.607672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
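The same restriction is enforced on the target side: with the key file still world-readable, nvmf_subsystem_add_host cannot load the PSK and the RPC fails with -32603 ("Internal error"), as shown a little further up. Condensed, under the same path assumptions as the earlier sketches:

    # target-side negative test while key_long.txt is still mode 0666
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key_long.txt
    # target log: "Incorrect permissions for PSK file" / "Could not retrieve PSK from file"
    # caller sees: JSON-RPC -32603 "Internal error"
    chmod 0600 test/nvmf/target/key_long.txt        # tls.sh@190 restores a mode the loader accepts

With the permissions fixed, the target is restarted and the positive TLS path is exercised again below.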
00:24:54.785 [2024-04-23 16:23:53.607702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.355 16:23:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.355 16:23:54 -- common/autotest_common.sh@852 -- # return 0 00:24:55.355 16:23:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:55.355 16:23:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.355 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:24:55.355 16:23:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.355 16:23:54 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:55.355 16:23:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:55.355 16:23:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:55.355 [2024-04-23 16:23:54.253764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.355 16:23:54 -- target/tls.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:55.616 16:23:54 -- target/tls.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:55.877 [2024-04-23 16:23:54.549821] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:55.878 [2024-04-23 16:23:54.550096] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.878 16:23:54 -- target/tls.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:55.878 malloc0 00:24:55.878 16:23:54 -- target/tls.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:56.139 16:23:54 -- target/tls.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:56.139 16:23:55 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:56.139 16:23:55 -- target/tls.sh@197 -- # bdevperf_pid=3179863 00:24:56.139 16:23:55 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.139 16:23:55 -- target/tls.sh@200 -- # waitforlisten 3179863 /var/tmp/bdevperf.sock 00:24:56.139 16:23:55 -- common/autotest_common.sh@819 -- # '[' -z 3179863 ']' 00:24:56.139 16:23:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.139 16:23:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:56.139 16:23:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
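As in the first pass, bdevperf is started here in its RPC-server mode: -z keeps it idle and -r exposes a JSON-RPC socket, so the TLS-protected controller is attached and the workload is triggered from outside rather than at process start. Stripped of the CI paths, the pattern is:

    # idle bdevperf exposing an RPC socket; queue depth 128, 4 KiB verify I/O, 10 s run time
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # ...attach TLSTEST over the socket with bdev_nvme_attach_controller --psk, as sketched earlier...
    # then start the queued jobs and wait up to 20 s for them to finish
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

In this particular pass the instance is mainly used so that its live configuration, PSK path included, can be captured with save_config, which is what the large JSON dumps that follow are.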
00:24:56.139 16:23:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:56.139 16:23:55 -- common/autotest_common.sh@10 -- # set +x 00:24:56.399 [2024-04-23 16:23:55.120846] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:56.399 [2024-04-23 16:23:55.120996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179863 ] 00:24:56.399 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.399 [2024-04-23 16:23:55.252253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.659 [2024-04-23 16:23:55.344221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.921 16:23:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.921 16:23:55 -- common/autotest_common.sh@852 -- # return 0 00:24:56.921 16:23:55 -- target/tls.sh@201 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:24:57.181 [2024-04-23 16:23:55.978445] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.181 TLSTESTn1 00:24:57.181 16:23:56 -- target/tls.sh@205 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:24:57.443 16:23:56 -- target/tls.sh@205 -- # tgtconf='{ 00:24:57.443 "subsystems": [ 00:24:57.443 { 00:24:57.443 "subsystem": "iobuf", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "iobuf_set_options", 00:24:57.443 "params": { 00:24:57.443 "small_pool_count": 8192, 00:24:57.443 "large_pool_count": 1024, 00:24:57.443 "small_bufsize": 8192, 00:24:57.443 "large_bufsize": 135168 00:24:57.443 } 00:24:57.443 } 00:24:57.443 ] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "sock", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "sock_impl_set_options", 00:24:57.443 "params": { 00:24:57.443 "impl_name": "posix", 00:24:57.443 "recv_buf_size": 2097152, 00:24:57.443 "send_buf_size": 2097152, 00:24:57.443 "enable_recv_pipe": true, 00:24:57.443 "enable_quickack": false, 00:24:57.443 "enable_placement_id": 0, 00:24:57.443 "enable_zerocopy_send_server": true, 00:24:57.443 "enable_zerocopy_send_client": false, 00:24:57.443 "zerocopy_threshold": 0, 00:24:57.443 "tls_version": 0, 00:24:57.443 "enable_ktls": false 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "sock_impl_set_options", 00:24:57.443 "params": { 00:24:57.443 "impl_name": "ssl", 00:24:57.443 "recv_buf_size": 4096, 00:24:57.443 "send_buf_size": 4096, 00:24:57.443 "enable_recv_pipe": true, 00:24:57.443 "enable_quickack": false, 00:24:57.443 "enable_placement_id": 0, 00:24:57.443 "enable_zerocopy_send_server": true, 00:24:57.443 "enable_zerocopy_send_client": false, 00:24:57.443 "zerocopy_threshold": 0, 00:24:57.443 "tls_version": 0, 00:24:57.443 "enable_ktls": false 00:24:57.443 } 00:24:57.443 } 00:24:57.443 ] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "vmd", 00:24:57.443 "config": [] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "accel", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "accel_set_options", 00:24:57.443 "params": { 00:24:57.443 "small_cache_size": 128, 00:24:57.443 
"large_cache_size": 16, 00:24:57.443 "task_count": 2048, 00:24:57.443 "sequence_count": 2048, 00:24:57.443 "buf_count": 2048 00:24:57.443 } 00:24:57.443 } 00:24:57.443 ] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "bdev", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "bdev_set_options", 00:24:57.443 "params": { 00:24:57.443 "bdev_io_pool_size": 65535, 00:24:57.443 "bdev_io_cache_size": 256, 00:24:57.443 "bdev_auto_examine": true, 00:24:57.443 "iobuf_small_cache_size": 128, 00:24:57.443 "iobuf_large_cache_size": 16 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_raid_set_options", 00:24:57.443 "params": { 00:24:57.443 "process_window_size_kb": 1024 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_iscsi_set_options", 00:24:57.443 "params": { 00:24:57.443 "timeout_sec": 30 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_nvme_set_options", 00:24:57.443 "params": { 00:24:57.443 "action_on_timeout": "none", 00:24:57.443 "timeout_us": 0, 00:24:57.443 "timeout_admin_us": 0, 00:24:57.443 "keep_alive_timeout_ms": 10000, 00:24:57.443 "transport_retry_count": 4, 00:24:57.443 "arbitration_burst": 0, 00:24:57.443 "low_priority_weight": 0, 00:24:57.443 "medium_priority_weight": 0, 00:24:57.443 "high_priority_weight": 0, 00:24:57.443 "nvme_adminq_poll_period_us": 10000, 00:24:57.443 "nvme_ioq_poll_period_us": 0, 00:24:57.443 "io_queue_requests": 0, 00:24:57.443 "delay_cmd_submit": true, 00:24:57.443 "bdev_retry_count": 3, 00:24:57.443 "transport_ack_timeout": 0, 00:24:57.443 "ctrlr_loss_timeout_sec": 0, 00:24:57.443 "reconnect_delay_sec": 0, 00:24:57.443 "fast_io_fail_timeout_sec": 0, 00:24:57.443 "generate_uuids": false, 00:24:57.443 "transport_tos": 0, 00:24:57.443 "io_path_stat": false, 00:24:57.443 "allow_accel_sequence": false 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_nvme_set_hotplug", 00:24:57.443 "params": { 00:24:57.443 "period_us": 100000, 00:24:57.443 "enable": false 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_malloc_create", 00:24:57.443 "params": { 00:24:57.443 "name": "malloc0", 00:24:57.443 "num_blocks": 8192, 00:24:57.443 "block_size": 4096, 00:24:57.443 "physical_block_size": 4096, 00:24:57.443 "uuid": "0698e311-9e74-4f7c-ac5e-a5ba502d6005", 00:24:57.443 "optimal_io_boundary": 0 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "bdev_wait_for_examine" 00:24:57.443 } 00:24:57.443 ] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "nbd", 00:24:57.443 "config": [] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "scheduler", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "framework_set_scheduler", 00:24:57.443 "params": { 00:24:57.443 "name": "static" 00:24:57.443 } 00:24:57.443 } 00:24:57.443 ] 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "subsystem": "nvmf", 00:24:57.443 "config": [ 00:24:57.443 { 00:24:57.443 "method": "nvmf_set_config", 00:24:57.443 "params": { 00:24:57.443 "discovery_filter": "match_any", 00:24:57.443 "admin_cmd_passthru": { 00:24:57.443 "identify_ctrlr": false 00:24:57.443 } 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "nvmf_set_max_subsystems", 00:24:57.443 "params": { 00:24:57.443 "max_subsystems": 1024 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "nvmf_set_crdt", 00:24:57.443 "params": { 00:24:57.443 "crdt1": 0, 00:24:57.443 "crdt2": 0, 00:24:57.443 "crdt3": 0 00:24:57.443 } 00:24:57.443 }, 
00:24:57.443 { 00:24:57.443 "method": "nvmf_create_transport", 00:24:57.443 "params": { 00:24:57.443 "trtype": "TCP", 00:24:57.443 "max_queue_depth": 128, 00:24:57.443 "max_io_qpairs_per_ctrlr": 127, 00:24:57.443 "in_capsule_data_size": 4096, 00:24:57.443 "max_io_size": 131072, 00:24:57.443 "io_unit_size": 131072, 00:24:57.443 "max_aq_depth": 128, 00:24:57.443 "num_shared_buffers": 511, 00:24:57.443 "buf_cache_size": 4294967295, 00:24:57.443 "dif_insert_or_strip": false, 00:24:57.443 "zcopy": false, 00:24:57.443 "c2h_success": false, 00:24:57.443 "sock_priority": 0, 00:24:57.443 "abort_timeout_sec": 1 00:24:57.443 } 00:24:57.443 }, 00:24:57.443 { 00:24:57.443 "method": "nvmf_create_subsystem", 00:24:57.443 "params": { 00:24:57.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.443 "allow_any_host": false, 00:24:57.444 "serial_number": "SPDK00000000000001", 00:24:57.444 "model_number": "SPDK bdev Controller", 00:24:57.444 "max_namespaces": 10, 00:24:57.444 "min_cntlid": 1, 00:24:57.444 "max_cntlid": 65519, 00:24:57.444 "ana_reporting": false 00:24:57.444 } 00:24:57.444 }, 00:24:57.444 { 00:24:57.444 "method": "nvmf_subsystem_add_host", 00:24:57.444 "params": { 00:24:57.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.444 "host": "nqn.2016-06.io.spdk:host1", 00:24:57.444 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:24:57.444 } 00:24:57.444 }, 00:24:57.444 { 00:24:57.444 "method": "nvmf_subsystem_add_ns", 00:24:57.444 "params": { 00:24:57.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.444 "namespace": { 00:24:57.444 "nsid": 1, 00:24:57.444 "bdev_name": "malloc0", 00:24:57.444 "nguid": "0698E3119E744F7CAC5EA5BA502D6005", 00:24:57.444 "uuid": "0698e311-9e74-4f7c-ac5e-a5ba502d6005" 00:24:57.444 } 00:24:57.444 } 00:24:57.444 }, 00:24:57.444 { 00:24:57.444 "method": "nvmf_subsystem_add_listener", 00:24:57.444 "params": { 00:24:57.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.444 "listen_address": { 00:24:57.444 "trtype": "TCP", 00:24:57.444 "adrfam": "IPv4", 00:24:57.444 "traddr": "10.0.0.2", 00:24:57.444 "trsvcid": "4420" 00:24:57.444 }, 00:24:57.444 "secure_channel": true 00:24:57.444 } 00:24:57.444 } 00:24:57.444 ] 00:24:57.444 } 00:24:57.444 ] 00:24:57.444 }' 00:24:57.444 16:23:56 -- target/tls.sh@206 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:57.704 16:23:56 -- target/tls.sh@206 -- # bdevperfconf='{ 00:24:57.704 "subsystems": [ 00:24:57.704 { 00:24:57.704 "subsystem": "iobuf", 00:24:57.704 "config": [ 00:24:57.704 { 00:24:57.704 "method": "iobuf_set_options", 00:24:57.704 "params": { 00:24:57.704 "small_pool_count": 8192, 00:24:57.704 "large_pool_count": 1024, 00:24:57.704 "small_bufsize": 8192, 00:24:57.704 "large_bufsize": 135168 00:24:57.704 } 00:24:57.704 } 00:24:57.704 ] 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "subsystem": "sock", 00:24:57.704 "config": [ 00:24:57.704 { 00:24:57.704 "method": "sock_impl_set_options", 00:24:57.704 "params": { 00:24:57.704 "impl_name": "posix", 00:24:57.704 "recv_buf_size": 2097152, 00:24:57.704 "send_buf_size": 2097152, 00:24:57.704 "enable_recv_pipe": true, 00:24:57.704 "enable_quickack": false, 00:24:57.704 "enable_placement_id": 0, 00:24:57.704 "enable_zerocopy_send_server": true, 00:24:57.704 "enable_zerocopy_send_client": false, 00:24:57.704 "zerocopy_threshold": 0, 00:24:57.704 "tls_version": 0, 00:24:57.704 "enable_ktls": false 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "sock_impl_set_options", 
00:24:57.704 "params": { 00:24:57.704 "impl_name": "ssl", 00:24:57.704 "recv_buf_size": 4096, 00:24:57.704 "send_buf_size": 4096, 00:24:57.704 "enable_recv_pipe": true, 00:24:57.704 "enable_quickack": false, 00:24:57.704 "enable_placement_id": 0, 00:24:57.704 "enable_zerocopy_send_server": true, 00:24:57.704 "enable_zerocopy_send_client": false, 00:24:57.704 "zerocopy_threshold": 0, 00:24:57.704 "tls_version": 0, 00:24:57.704 "enable_ktls": false 00:24:57.704 } 00:24:57.704 } 00:24:57.704 ] 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "subsystem": "vmd", 00:24:57.704 "config": [] 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "subsystem": "accel", 00:24:57.704 "config": [ 00:24:57.704 { 00:24:57.704 "method": "accel_set_options", 00:24:57.704 "params": { 00:24:57.704 "small_cache_size": 128, 00:24:57.704 "large_cache_size": 16, 00:24:57.704 "task_count": 2048, 00:24:57.704 "sequence_count": 2048, 00:24:57.704 "buf_count": 2048 00:24:57.704 } 00:24:57.704 } 00:24:57.704 ] 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "subsystem": "bdev", 00:24:57.704 "config": [ 00:24:57.704 { 00:24:57.704 "method": "bdev_set_options", 00:24:57.704 "params": { 00:24:57.704 "bdev_io_pool_size": 65535, 00:24:57.704 "bdev_io_cache_size": 256, 00:24:57.704 "bdev_auto_examine": true, 00:24:57.704 "iobuf_small_cache_size": 128, 00:24:57.704 "iobuf_large_cache_size": 16 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_raid_set_options", 00:24:57.704 "params": { 00:24:57.704 "process_window_size_kb": 1024 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_iscsi_set_options", 00:24:57.704 "params": { 00:24:57.704 "timeout_sec": 30 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_nvme_set_options", 00:24:57.704 "params": { 00:24:57.704 "action_on_timeout": "none", 00:24:57.704 "timeout_us": 0, 00:24:57.704 "timeout_admin_us": 0, 00:24:57.704 "keep_alive_timeout_ms": 10000, 00:24:57.704 "transport_retry_count": 4, 00:24:57.704 "arbitration_burst": 0, 00:24:57.704 "low_priority_weight": 0, 00:24:57.704 "medium_priority_weight": 0, 00:24:57.704 "high_priority_weight": 0, 00:24:57.704 "nvme_adminq_poll_period_us": 10000, 00:24:57.704 "nvme_ioq_poll_period_us": 0, 00:24:57.704 "io_queue_requests": 512, 00:24:57.704 "delay_cmd_submit": true, 00:24:57.704 "bdev_retry_count": 3, 00:24:57.704 "transport_ack_timeout": 0, 00:24:57.704 "ctrlr_loss_timeout_sec": 0, 00:24:57.704 "reconnect_delay_sec": 0, 00:24:57.704 "fast_io_fail_timeout_sec": 0, 00:24:57.704 "generate_uuids": false, 00:24:57.704 "transport_tos": 0, 00:24:57.704 "io_path_stat": false, 00:24:57.704 "allow_accel_sequence": false 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_nvme_attach_controller", 00:24:57.704 "params": { 00:24:57.704 "name": "TLSTEST", 00:24:57.704 "trtype": "TCP", 00:24:57.704 "adrfam": "IPv4", 00:24:57.704 "traddr": "10.0.0.2", 00:24:57.704 "trsvcid": "4420", 00:24:57.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.704 "prchk_reftag": false, 00:24:57.704 "prchk_guard": false, 00:24:57.704 "ctrlr_loss_timeout_sec": 0, 00:24:57.704 "reconnect_delay_sec": 0, 00:24:57.704 "fast_io_fail_timeout_sec": 0, 00:24:57.704 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:24:57.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.704 "hdgst": false, 00:24:57.704 "ddgst": false 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_nvme_set_hotplug", 00:24:57.704 "params": { 00:24:57.704 
"period_us": 100000, 00:24:57.704 "enable": false 00:24:57.704 } 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "method": "bdev_wait_for_examine" 00:24:57.704 } 00:24:57.704 ] 00:24:57.704 }, 00:24:57.704 { 00:24:57.704 "subsystem": "nbd", 00:24:57.704 "config": [] 00:24:57.704 } 00:24:57.704 ] 00:24:57.704 }' 00:24:57.704 16:23:56 -- target/tls.sh@208 -- # killprocess 3179863 00:24:57.704 16:23:56 -- common/autotest_common.sh@926 -- # '[' -z 3179863 ']' 00:24:57.704 16:23:56 -- common/autotest_common.sh@930 -- # kill -0 3179863 00:24:57.704 16:23:56 -- common/autotest_common.sh@931 -- # uname 00:24:57.704 16:23:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:57.704 16:23:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3179863 00:24:57.704 16:23:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:24:57.704 16:23:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:24:57.704 16:23:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3179863' 00:24:57.704 killing process with pid 3179863 00:24:57.704 16:23:56 -- common/autotest_common.sh@945 -- # kill 3179863 00:24:57.704 Received shutdown signal, test time was about 10.000000 seconds 00:24:57.704 00:24:57.704 Latency(us) 00:24:57.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.704 =================================================================================================================== 00:24:57.704 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:57.704 16:23:56 -- common/autotest_common.sh@950 -- # wait 3179863 00:24:58.270 16:23:56 -- target/tls.sh@209 -- # killprocess 3179534 00:24:58.270 16:23:56 -- common/autotest_common.sh@926 -- # '[' -z 3179534 ']' 00:24:58.270 16:23:56 -- common/autotest_common.sh@930 -- # kill -0 3179534 00:24:58.270 16:23:56 -- common/autotest_common.sh@931 -- # uname 00:24:58.270 16:23:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:58.270 16:23:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3179534 00:24:58.270 16:23:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:58.270 16:23:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:58.270 16:23:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3179534' 00:24:58.270 killing process with pid 3179534 00:24:58.270 16:23:56 -- common/autotest_common.sh@945 -- # kill 3179534 00:24:58.270 16:23:56 -- common/autotest_common.sh@950 -- # wait 3179534 00:24:58.530 16:23:57 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:58.530 16:23:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:58.530 16:23:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:58.530 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:24:58.530 16:23:57 -- target/tls.sh@212 -- # echo '{ 00:24:58.530 "subsystems": [ 00:24:58.530 { 00:24:58.530 "subsystem": "iobuf", 00:24:58.530 "config": [ 00:24:58.530 { 00:24:58.530 "method": "iobuf_set_options", 00:24:58.530 "params": { 00:24:58.530 "small_pool_count": 8192, 00:24:58.530 "large_pool_count": 1024, 00:24:58.530 "small_bufsize": 8192, 00:24:58.530 "large_bufsize": 135168 00:24:58.530 } 00:24:58.530 } 00:24:58.530 ] 00:24:58.530 }, 00:24:58.530 { 00:24:58.530 "subsystem": "sock", 00:24:58.530 "config": [ 00:24:58.530 { 00:24:58.530 "method": "sock_impl_set_options", 00:24:58.530 "params": { 00:24:58.530 "impl_name": "posix", 00:24:58.530 "recv_buf_size": 2097152, 
00:24:58.530 "send_buf_size": 2097152, 00:24:58.530 "enable_recv_pipe": true, 00:24:58.530 "enable_quickack": false, 00:24:58.530 "enable_placement_id": 0, 00:24:58.530 "enable_zerocopy_send_server": true, 00:24:58.530 "enable_zerocopy_send_client": false, 00:24:58.530 "zerocopy_threshold": 0, 00:24:58.530 "tls_version": 0, 00:24:58.530 "enable_ktls": false 00:24:58.530 } 00:24:58.530 }, 00:24:58.530 { 00:24:58.530 "method": "sock_impl_set_options", 00:24:58.530 "params": { 00:24:58.530 "impl_name": "ssl", 00:24:58.530 "recv_buf_size": 4096, 00:24:58.530 "send_buf_size": 4096, 00:24:58.530 "enable_recv_pipe": true, 00:24:58.530 "enable_quickack": false, 00:24:58.530 "enable_placement_id": 0, 00:24:58.530 "enable_zerocopy_send_server": true, 00:24:58.530 "enable_zerocopy_send_client": false, 00:24:58.530 "zerocopy_threshold": 0, 00:24:58.530 "tls_version": 0, 00:24:58.530 "enable_ktls": false 00:24:58.530 } 00:24:58.530 } 00:24:58.530 ] 00:24:58.530 }, 00:24:58.530 { 00:24:58.530 "subsystem": "vmd", 00:24:58.530 "config": [] 00:24:58.530 }, 00:24:58.530 { 00:24:58.530 "subsystem": "accel", 00:24:58.530 "config": [ 00:24:58.530 { 00:24:58.530 "method": "accel_set_options", 00:24:58.530 "params": { 00:24:58.530 "small_cache_size": 128, 00:24:58.530 "large_cache_size": 16, 00:24:58.530 "task_count": 2048, 00:24:58.530 "sequence_count": 2048, 00:24:58.530 "buf_count": 2048 00:24:58.530 } 00:24:58.530 } 00:24:58.530 ] 00:24:58.530 }, 00:24:58.530 { 00:24:58.530 "subsystem": "bdev", 00:24:58.530 "config": [ 00:24:58.530 { 00:24:58.530 "method": "bdev_set_options", 00:24:58.531 "params": { 00:24:58.531 "bdev_io_pool_size": 65535, 00:24:58.531 "bdev_io_cache_size": 256, 00:24:58.531 "bdev_auto_examine": true, 00:24:58.531 "iobuf_small_cache_size": 128, 00:24:58.531 "iobuf_large_cache_size": 16 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_raid_set_options", 00:24:58.531 "params": { 00:24:58.531 "process_window_size_kb": 1024 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_iscsi_set_options", 00:24:58.531 "params": { 00:24:58.531 "timeout_sec": 30 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_nvme_set_options", 00:24:58.531 "params": { 00:24:58.531 "action_on_timeout": "none", 00:24:58.531 "timeout_us": 0, 00:24:58.531 "timeout_admin_us": 0, 00:24:58.531 "keep_alive_timeout_ms": 10000, 00:24:58.531 "transport_retry_count": 4, 00:24:58.531 "arbitration_burst": 0, 00:24:58.531 "low_priority_weight": 0, 00:24:58.531 "medium_priority_weight": 0, 00:24:58.531 "high_priority_weight": 0, 00:24:58.531 "nvme_adminq_poll_period_us": 10000, 00:24:58.531 "nvme_ioq_poll_period_us": 0, 00:24:58.531 "io_queue_requests": 0, 00:24:58.531 "delay_cmd_submit": true, 00:24:58.531 "bdev_retry_count": 3, 00:24:58.531 "transport_ack_timeout": 0, 00:24:58.531 "ctrlr_loss_timeout_sec": 0, 00:24:58.531 "reconnect_delay_sec": 0, 00:24:58.531 "fast_io_fail_timeout_sec": 0, 00:24:58.531 "generate_uuids": false, 00:24:58.531 "transport_tos": 0, 00:24:58.531 "io_path_stat": false, 00:24:58.531 "allow_accel_sequence": false 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_nvme_set_hotplug", 00:24:58.531 "params": { 00:24:58.531 "period_us": 100000, 00:24:58.531 "enable": false 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_malloc_create", 00:24:58.531 "params": { 00:24:58.531 "name": "malloc0", 00:24:58.531 "num_blocks": 8192, 00:24:58.531 "block_size": 4096, 00:24:58.531 "physical_block_size": 
4096, 00:24:58.531 "uuid": "0698e311-9e74-4f7c-ac5e-a5ba502d6005", 00:24:58.531 "optimal_io_boundary": 0 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "bdev_wait_for_examine" 00:24:58.531 } 00:24:58.531 ] 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "subsystem": "nbd", 00:24:58.531 "config": [] 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "subsystem": "scheduler", 00:24:58.531 "config": [ 00:24:58.531 { 00:24:58.531 "method": "framework_set_scheduler", 00:24:58.531 "params": { 00:24:58.531 "name": "static" 00:24:58.531 } 00:24:58.531 } 00:24:58.531 ] 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "subsystem": "nvmf", 00:24:58.531 "config": [ 00:24:58.531 { 00:24:58.531 "method": "nvmf_set_config", 00:24:58.531 "params": { 00:24:58.531 "discovery_filter": "match_any", 00:24:58.531 "admin_cmd_passthru": { 00:24:58.531 "identify_ctrlr": false 00:24:58.531 } 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_set_max_subsystems", 00:24:58.531 "params": { 00:24:58.531 "max_subsystems": 1024 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_set_crdt", 00:24:58.531 "params": { 00:24:58.531 "crdt1": 0, 00:24:58.531 "crdt2": 0, 00:24:58.531 "crdt3": 0 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_create_transport", 00:24:58.531 "params": { 00:24:58.531 "trtype": "TCP", 00:24:58.531 "max_queue_depth": 128, 00:24:58.531 "max_io_qpairs_per_ctrlr": 127, 00:24:58.531 "in_capsule_data_size": 4096, 00:24:58.531 "max_io_size": 131072, 00:24:58.531 "io_unit_size": 131072, 00:24:58.531 "max_aq_depth": 128, 00:24:58.531 "num_shared_buffers": 511, 00:24:58.531 "buf_cache_size": 4294967295, 00:24:58.531 "dif_insert_or_strip": false, 00:24:58.531 "zcopy": false, 00:24:58.531 "c2h_success": false, 00:24:58.531 "sock_priority": 0, 00:24:58.531 "abort_timeout_sec": 1 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_create_subsystem", 00:24:58.531 "params": { 00:24:58.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.531 "allow_any_host": false, 00:24:58.531 "serial_number": "SPDK00000000000001", 00:24:58.531 "model_number": "SPDK bdev Controller", 00:24:58.531 "max_namespaces": 10, 00:24:58.531 "min_cntlid": 1, 00:24:58.531 "max_cntlid": 65519, 00:24:58.531 "ana_reporting": false 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_subsystem_add_host", 00:24:58.531 "params": { 00:24:58.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.531 "host": "nqn.2016-06.io.spdk:host1", 00:24:58.531 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_subsystem_add_ns", 00:24:58.531 "params": { 00:24:58.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.531 "namespace": { 00:24:58.531 "nsid": 1, 00:24:58.531 "bdev_name": "malloc0", 00:24:58.531 "nguid": "0698E3119E744F7CAC5EA5BA502D6005", 00:24:58.531 "uuid": "0698e311-9e74-4f7c-ac5e-a5ba502d6005" 00:24:58.531 } 00:24:58.531 } 00:24:58.531 }, 00:24:58.531 { 00:24:58.531 "method": "nvmf_subsystem_add_listener", 00:24:58.531 "params": { 00:24:58.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.531 "listen_address": { 00:24:58.531 "trtype": "TCP", 00:24:58.531 "adrfam": "IPv4", 00:24:58.531 "traddr": "10.0.0.2", 00:24:58.531 "trsvcid": "4420" 00:24:58.531 }, 00:24:58.531 "secure_channel": true 00:24:58.531 } 00:24:58.531 } 00:24:58.531 ] 00:24:58.531 } 00:24:58.531 ] 00:24:58.531 }' 00:24:58.531 16:23:57 -- nvmf/common.sh@469 -- # 
nvmfpid=3180474 00:24:58.531 16:23:57 -- nvmf/common.sh@470 -- # waitforlisten 3180474 00:24:58.531 16:23:57 -- common/autotest_common.sh@819 -- # '[' -z 3180474 ']' 00:24:58.531 16:23:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.531 16:23:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:58.531 16:23:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.531 16:23:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:58.531 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:24:58.531 16:23:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:58.792 [2024-04-23 16:23:57.512342] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:58.792 [2024-04-23 16:23:57.512419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.792 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.792 [2024-04-23 16:23:57.603936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.792 [2024-04-23 16:23:57.699632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:58.792 [2024-04-23 16:23:57.699804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.792 [2024-04-23 16:23:57.699818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.792 [2024-04-23 16:23:57.699828] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.792 [2024-04-23 16:23:57.699860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.053 [2024-04-23 16:23:57.970320] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.314 [2024-04-23 16:23:58.009641] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.314 [2024-04-23 16:23:58.009902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.314 16:23:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:59.314 16:23:58 -- common/autotest_common.sh@852 -- # return 0 00:24:59.314 16:23:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:59.314 16:23:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:59.314 16:23:58 -- common/autotest_common.sh@10 -- # set +x 00:24:59.314 16:23:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.314 16:23:58 -- target/tls.sh@216 -- # bdevperf_pid=3180497 00:24:59.314 16:23:58 -- target/tls.sh@217 -- # waitforlisten 3180497 /var/tmp/bdevperf.sock 00:24:59.314 16:23:58 -- common/autotest_common.sh@819 -- # '[' -z 3180497 ']' 00:24:59.314 16:23:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.314 16:23:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:59.314 16:23:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
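The last phase replays those captured configurations instead of re-issuing every RPC: tls.sh keeps the save_config output in shell variables and feeds it to the new nvmf_tgt and bdevperf processes through /dev/fd/62 and /dev/fd/63. A rough file-based equivalent is sketched below; tgt.json and bdevperf.json are made-up names for illustration, and the CI run additionally wraps nvmf_tgt in "ip netns exec cvl_0_0_ns_spdk".

    # capture both live configurations while the first target/bdevperf pair is still running
    scripts/rpc.py save_config > tgt.json
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
    # restart both purely from the saved JSON; -c reads a config file (the script passes the
    # JSON from shell variables instead, which is why /dev/fd/62 and /dev/fd/63 show up above)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json &
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c bdevperf.json &

Because the saved bdevperf config already carries the bdev_nvme_attach_controller call and its "psk" path, the TLS session comes up as soon as the config is parsed, and only perform_tests is needed to run the final verify pass shown below.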
00:24:59.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.314 16:23:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:59.314 16:23:58 -- common/autotest_common.sh@10 -- # set +x 00:24:59.314 16:23:58 -- target/tls.sh@213 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:59.314 16:23:58 -- target/tls.sh@213 -- # echo '{ 00:24:59.314 "subsystems": [ 00:24:59.314 { 00:24:59.314 "subsystem": "iobuf", 00:24:59.314 "config": [ 00:24:59.314 { 00:24:59.314 "method": "iobuf_set_options", 00:24:59.314 "params": { 00:24:59.314 "small_pool_count": 8192, 00:24:59.314 "large_pool_count": 1024, 00:24:59.314 "small_bufsize": 8192, 00:24:59.314 "large_bufsize": 135168 00:24:59.314 } 00:24:59.314 } 00:24:59.314 ] 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "subsystem": "sock", 00:24:59.314 "config": [ 00:24:59.314 { 00:24:59.314 "method": "sock_impl_set_options", 00:24:59.314 "params": { 00:24:59.314 "impl_name": "posix", 00:24:59.314 "recv_buf_size": 2097152, 00:24:59.314 "send_buf_size": 2097152, 00:24:59.314 "enable_recv_pipe": true, 00:24:59.314 "enable_quickack": false, 00:24:59.314 "enable_placement_id": 0, 00:24:59.314 "enable_zerocopy_send_server": true, 00:24:59.314 "enable_zerocopy_send_client": false, 00:24:59.314 "zerocopy_threshold": 0, 00:24:59.314 "tls_version": 0, 00:24:59.314 "enable_ktls": false 00:24:59.314 } 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "method": "sock_impl_set_options", 00:24:59.314 "params": { 00:24:59.314 "impl_name": "ssl", 00:24:59.314 "recv_buf_size": 4096, 00:24:59.314 "send_buf_size": 4096, 00:24:59.314 "enable_recv_pipe": true, 00:24:59.314 "enable_quickack": false, 00:24:59.314 "enable_placement_id": 0, 00:24:59.314 "enable_zerocopy_send_server": true, 00:24:59.314 "enable_zerocopy_send_client": false, 00:24:59.314 "zerocopy_threshold": 0, 00:24:59.314 "tls_version": 0, 00:24:59.314 "enable_ktls": false 00:24:59.314 } 00:24:59.314 } 00:24:59.314 ] 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "subsystem": "vmd", 00:24:59.314 "config": [] 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "subsystem": "accel", 00:24:59.314 "config": [ 00:24:59.314 { 00:24:59.314 "method": "accel_set_options", 00:24:59.314 "params": { 00:24:59.314 "small_cache_size": 128, 00:24:59.314 "large_cache_size": 16, 00:24:59.314 "task_count": 2048, 00:24:59.314 "sequence_count": 2048, 00:24:59.314 "buf_count": 2048 00:24:59.314 } 00:24:59.314 } 00:24:59.314 ] 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "subsystem": "bdev", 00:24:59.314 "config": [ 00:24:59.314 { 00:24:59.314 "method": "bdev_set_options", 00:24:59.314 "params": { 00:24:59.314 "bdev_io_pool_size": 65535, 00:24:59.314 "bdev_io_cache_size": 256, 00:24:59.314 "bdev_auto_examine": true, 00:24:59.314 "iobuf_small_cache_size": 128, 00:24:59.314 "iobuf_large_cache_size": 16 00:24:59.314 } 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "method": "bdev_raid_set_options", 00:24:59.314 "params": { 00:24:59.314 "process_window_size_kb": 1024 00:24:59.314 } 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "method": "bdev_iscsi_set_options", 00:24:59.314 "params": { 00:24:59.314 "timeout_sec": 30 00:24:59.314 } 00:24:59.314 }, 00:24:59.314 { 00:24:59.314 "method": "bdev_nvme_set_options", 00:24:59.314 "params": { 00:24:59.314 "action_on_timeout": "none", 00:24:59.314 "timeout_us": 0, 00:24:59.314 "timeout_admin_us": 0, 00:24:59.314 "keep_alive_timeout_ms": 10000, 00:24:59.314 
"transport_retry_count": 4, 00:24:59.314 "arbitration_burst": 0, 00:24:59.314 "low_priority_weight": 0, 00:24:59.314 "medium_priority_weight": 0, 00:24:59.314 "high_priority_weight": 0, 00:24:59.315 "nvme_adminq_poll_period_us": 10000, 00:24:59.315 "nvme_ioq_poll_period_us": 0, 00:24:59.315 "io_queue_requests": 512, 00:24:59.315 "delay_cmd_submit": true, 00:24:59.315 "bdev_retry_count": 3, 00:24:59.315 "transport_ack_timeout": 0, 00:24:59.315 "ctrlr_loss_timeout_sec": 0, 00:24:59.315 "reconnect_delay_sec": 0, 00:24:59.315 "fast_io_fail_timeout_sec": 0, 00:24:59.315 "generate_uuids": false, 00:24:59.315 "transport_tos": 0, 00:24:59.315 "io_path_stat": false, 00:24:59.315 "allow_accel_sequence": false 00:24:59.315 } 00:24:59.315 }, 00:24:59.315 { 00:24:59.315 "method": "bdev_nvme_attach_controller", 00:24:59.315 "params": { 00:24:59.315 "name": "TLSTEST", 00:24:59.315 "trtype": "TCP", 00:24:59.315 "adrfam": "IPv4", 00:24:59.315 "traddr": "10.0.0.2", 00:24:59.315 "trsvcid": "4420", 00:24:59.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.315 "prchk_reftag": false, 00:24:59.315 "prchk_guard": false, 00:24:59.315 "ctrlr_loss_timeout_sec": 0, 00:24:59.315 "reconnect_delay_sec": 0, 00:24:59.315 "fast_io_fail_timeout_sec": 0, 00:24:59.315 "psk": "/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:24:59.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.315 "hdgst": false, 00:24:59.315 "ddgst": false 00:24:59.315 } 00:24:59.315 }, 00:24:59.315 { 00:24:59.315 "method": "bdev_nvme_set_hotplug", 00:24:59.315 "params": { 00:24:59.315 "period_us": 100000, 00:24:59.315 "enable": false 00:24:59.315 } 00:24:59.315 }, 00:24:59.315 { 00:24:59.315 "method": "bdev_wait_for_examine" 00:24:59.315 } 00:24:59.315 ] 00:24:59.315 }, 00:24:59.315 { 00:24:59.315 "subsystem": "nbd", 00:24:59.315 "config": [] 00:24:59.315 } 00:24:59.315 ] 00:24:59.315 }' 00:24:59.574 [2024-04-23 16:23:58.321373] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:24:59.574 [2024-04-23 16:23:58.321514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180497 ] 00:24:59.574 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.574 [2024-04-23 16:23:58.457556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.833 [2024-04-23 16:23:58.555231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.833 [2024-04-23 16:23:58.762241] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.091 16:23:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:00.091 16:23:58 -- common/autotest_common.sh@852 -- # return 0 00:25:00.091 16:23:59 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:00.350 Running I/O for 10 seconds... 
00:25:10.338 00:25:10.338 Latency(us) 00:25:10.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.338 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:10.338 Verification LBA range: start 0x0 length 0x2000 00:25:10.338 TLSTESTn1 : 10.02 3432.25 13.41 0.00 0.00 37254.19 4018.39 65122.09 00:25:10.338 =================================================================================================================== 00:25:10.338 Total : 3432.25 13.41 0.00 0.00 37254.19 4018.39 65122.09 00:25:10.338 0 00:25:10.339 16:24:09 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.339 16:24:09 -- target/tls.sh@223 -- # killprocess 3180497 00:25:10.339 16:24:09 -- common/autotest_common.sh@926 -- # '[' -z 3180497 ']' 00:25:10.339 16:24:09 -- common/autotest_common.sh@930 -- # kill -0 3180497 00:25:10.339 16:24:09 -- common/autotest_common.sh@931 -- # uname 00:25:10.339 16:24:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:10.339 16:24:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3180497 00:25:10.339 16:24:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:10.339 16:24:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:10.339 16:24:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3180497' 00:25:10.339 killing process with pid 3180497 00:25:10.339 16:24:09 -- common/autotest_common.sh@945 -- # kill 3180497 00:25:10.339 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.339 00:25:10.339 Latency(us) 00:25:10.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.339 =================================================================================================================== 00:25:10.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.339 16:24:09 -- common/autotest_common.sh@950 -- # wait 3180497 00:25:10.598 16:24:09 -- target/tls.sh@224 -- # killprocess 3180474 00:25:10.598 16:24:09 -- common/autotest_common.sh@926 -- # '[' -z 3180474 ']' 00:25:10.598 16:24:09 -- common/autotest_common.sh@930 -- # kill -0 3180474 00:25:10.598 16:24:09 -- common/autotest_common.sh@931 -- # uname 00:25:10.598 16:24:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:10.598 16:24:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3180474 00:25:10.859 16:24:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:10.859 16:24:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:10.859 16:24:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3180474' 00:25:10.859 killing process with pid 3180474 00:25:10.859 16:24:09 -- common/autotest_common.sh@945 -- # kill 3180474 00:25:10.859 16:24:09 -- common/autotest_common.sh@950 -- # wait 3180474 00:25:11.431 16:24:10 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:25:11.431 16:24:10 -- target/tls.sh@227 -- # cleanup 00:25:11.431 16:24:10 -- target/tls.sh@15 -- # process_shm --id 0 00:25:11.431 16:24:10 -- common/autotest_common.sh@796 -- # type=--id 00:25:11.431 16:24:10 -- common/autotest_common.sh@797 -- # id=0 00:25:11.431 16:24:10 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:25:11.431 16:24:10 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:11.431 16:24:10 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:25:11.431 16:24:10 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:25:11.431 16:24:10 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:25:11.431 16:24:10 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:11.431 nvmf_trace.0 00:25:11.431 16:24:10 -- common/autotest_common.sh@811 -- # return 0 00:25:11.431 16:24:10 -- target/tls.sh@16 -- # killprocess 3180497 00:25:11.431 16:24:10 -- common/autotest_common.sh@926 -- # '[' -z 3180497 ']' 00:25:11.431 16:24:10 -- common/autotest_common.sh@930 -- # kill -0 3180497 00:25:11.431 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3180497) - No such process 00:25:11.431 16:24:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3180497 is not found' 00:25:11.431 Process with pid 3180497 is not found 00:25:11.431 16:24:10 -- target/tls.sh@17 -- # nvmftestfini 00:25:11.431 16:24:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:11.431 16:24:10 -- nvmf/common.sh@116 -- # sync 00:25:11.431 16:24:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:11.431 16:24:10 -- nvmf/common.sh@119 -- # set +e 00:25:11.431 16:24:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:11.431 16:24:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:11.431 rmmod nvme_tcp 00:25:11.431 rmmod nvme_fabrics 00:25:11.431 rmmod nvme_keyring 00:25:11.431 16:24:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:11.431 16:24:10 -- nvmf/common.sh@123 -- # set -e 00:25:11.431 16:24:10 -- nvmf/common.sh@124 -- # return 0 00:25:11.431 16:24:10 -- nvmf/common.sh@477 -- # '[' -n 3180474 ']' 00:25:11.431 16:24:10 -- nvmf/common.sh@478 -- # killprocess 3180474 00:25:11.431 16:24:10 -- common/autotest_common.sh@926 -- # '[' -z 3180474 ']' 00:25:11.431 16:24:10 -- common/autotest_common.sh@930 -- # kill -0 3180474 00:25:11.431 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3180474) - No such process 00:25:11.431 16:24:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3180474 is not found' 00:25:11.431 Process with pid 3180474 is not found 00:25:11.431 16:24:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:11.431 16:24:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:11.431 16:24:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:11.431 16:24:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.431 16:24:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:11.431 16:24:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.431 16:24:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.432 16:24:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.417 16:24:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:13.417 16:24:12 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:25:13.417 00:25:13.417 real 1m13.039s 00:25:13.417 user 1m46.292s 00:25:13.417 sys 0m26.903s 00:25:13.417 16:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.417 16:24:12 -- common/autotest_common.sh@10 -- # set +x 00:25:13.417 ************************************ 00:25:13.417 END TEST nvmf_tls 00:25:13.417 ************************************ 00:25:13.417 16:24:12 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:13.417 16:24:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:13.417 16:24:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:13.417 16:24:12 -- common/autotest_common.sh@10 -- # set +x 00:25:13.417 ************************************ 00:25:13.417 START TEST nvmf_fips 00:25:13.417 ************************************ 00:25:13.417 16:24:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:13.676 * Looking for test storage... 00:25:13.676 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:25:13.676 16:24:12 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.676 16:24:12 -- nvmf/common.sh@7 -- # uname -s 00:25:13.676 16:24:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.676 16:24:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.676 16:24:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.676 16:24:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.676 16:24:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.676 16:24:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.676 16:24:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.676 16:24:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.676 16:24:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.676 16:24:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.676 16:24:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:13.676 16:24:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:13.676 16:24:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.676 16:24:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.676 16:24:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:13.676 16:24:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:13.676 16:24:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.676 16:24:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.676 16:24:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.676 16:24:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.676 16:24:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.676 16:24:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.676 16:24:12 -- paths/export.sh@5 -- # export PATH 00:25:13.676 16:24:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.676 16:24:12 -- nvmf/common.sh@46 -- # : 0 00:25:13.676 16:24:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:13.676 16:24:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:13.676 16:24:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:13.676 16:24:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.676 16:24:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.676 16:24:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:13.676 16:24:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:13.676 16:24:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:13.676 16:24:12 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:13.676 16:24:12 -- fips/fips.sh@89 -- # check_openssl_version 00:25:13.676 16:24:12 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:13.676 16:24:12 -- fips/fips.sh@85 -- # openssl version 00:25:13.676 16:24:12 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:13.676 16:24:12 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:13.676 16:24:12 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:13.676 16:24:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:13.676 16:24:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:13.676 16:24:12 -- scripts/common.sh@335 -- # IFS=.-: 00:25:13.676 16:24:12 -- scripts/common.sh@335 -- # read -ra ver1 00:25:13.676 16:24:12 -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.676 16:24:12 -- scripts/common.sh@336 -- # read -ra ver2 00:25:13.676 16:24:12 -- scripts/common.sh@337 -- # local 'op=>=' 00:25:13.676 16:24:12 -- scripts/common.sh@339 -- # ver1_l=3 00:25:13.676 16:24:12 -- scripts/common.sh@340 -- # ver2_l=3 00:25:13.676 16:24:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:25:13.676 16:24:12 -- scripts/common.sh@343 -- # case "$op" in 00:25:13.676 16:24:12 -- scripts/common.sh@347 -- # : 1 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # decimal 3 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=3 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 3 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # ver1[v]=3 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # decimal 3 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=3 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 3 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # ver2[v]=3 00:25:13.676 16:24:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:13.676 16:24:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v++ )) 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # decimal 0 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=0 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 0 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # ver1[v]=0 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # decimal 0 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=0 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 0 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:13.676 16:24:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:13.676 16:24:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v++ )) 00:25:13.676 16:24:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # decimal 9 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=9 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 9 00:25:13.676 16:24:12 -- scripts/common.sh@364 -- # ver1[v]=9 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # decimal 0 00:25:13.676 16:24:12 -- scripts/common.sh@352 -- # local d=0 00:25:13.676 16:24:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:13.676 16:24:12 -- scripts/common.sh@354 -- # echo 0 00:25:13.676 16:24:12 -- scripts/common.sh@365 -- # ver2[v]=0 00:25:13.676 16:24:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:13.676 16:24:12 -- scripts/common.sh@366 -- # return 0 00:25:13.676 16:24:12 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:13.676 16:24:12 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:13.676 16:24:12 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:13.676 16:24:12 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:13.676 16:24:12 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:13.676 16:24:12 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:13.676 16:24:12 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:13.676 16:24:12 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:13.676 16:24:12 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:25:13.676 16:24:12 -- fips/fips.sh@114 -- # build_openssl_config 00:25:13.676 16:24:12 -- fips/fips.sh@37 -- # cat 00:25:13.676 16:24:12 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:13.676 16:24:12 -- fips/fips.sh@58 -- # cat - 00:25:13.676 16:24:12 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:13.676 16:24:12 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:13.677 16:24:12 -- fips/fips.sh@117 -- # mapfile -t providers 00:25:13.677 16:24:12 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:25:13.677 16:24:12 -- fips/fips.sh@117 -- # grep name 00:25:13.677 16:24:12 -- fips/fips.sh@117 -- # openssl list -providers 00:25:13.677 16:24:12 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:13.677 16:24:12 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:13.677 16:24:12 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:13.677 16:24:12 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:13.677 16:24:12 -- common/autotest_common.sh@640 -- # local es=0 00:25:13.677 16:24:12 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:13.677 16:24:12 -- common/autotest_common.sh@628 -- # local arg=openssl 00:25:13.677 16:24:12 -- fips/fips.sh@128 -- # : 00:25:13.677 16:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.677 16:24:12 -- common/autotest_common.sh@632 -- # type -t openssl 00:25:13.677 16:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.677 16:24:12 -- common/autotest_common.sh@634 -- # type -P openssl 00:25:13.677 16:24:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.677 16:24:12 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:25:13.677 16:24:12 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:25:13.677 16:24:12 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:25:13.677 Error setting digest 00:25:13.677 00B230D2CC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:13.677 00B230D2CC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:13.677 16:24:12 -- common/autotest_common.sh@643 -- # es=1 00:25:13.677 16:24:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:13.677 16:24:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:13.677 16:24:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
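The xtrace above walks scripts/common.sh's cmp_versions as it decides that OpenSSL 3.0.9 satisfies the 3.0.0 floor before probing the FIPS provider and the expected MD5 failure. A standalone equivalent of that dotted-version comparison, shown here in Python purely as an illustration of the traced logic:

```python
#!/usr/bin/env python3
"""Sketch of the version check traced above: split each dotted version into
numeric fields and compare them left to right, padding the shorter one."""

def ge(ver1: str, ver2: str) -> bool:
    """Return True if ver1 >= ver2 when compared field by field."""
    a = [int(x) for x in ver1.split(".")]
    b = [int(x) for x in ver2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))   # so 3.0 compares like 3.0.0
    b += [0] * (n - len(b))
    return a >= b

if __name__ == "__main__":
    # Mirrors 'ge 3.0.9 3.0.0' from the trace: 3.0.9 meets the 3.0.0 minimum.
    print(ge("3.0.9", "3.0.0"))  # True
```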
00:25:13.677 16:24:12 -- fips/fips.sh@131 -- # nvmftestinit 00:25:13.677 16:24:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:13.677 16:24:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.677 16:24:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:13.677 16:24:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:13.677 16:24:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:13.677 16:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.677 16:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.677 16:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.677 16:24:12 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:25:13.677 16:24:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:13.677 16:24:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:13.677 16:24:12 -- common/autotest_common.sh@10 -- # set +x 00:25:20.260 16:24:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:20.260 16:24:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:20.260 16:24:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:20.260 16:24:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:20.260 16:24:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:20.260 16:24:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:20.260 16:24:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:20.260 16:24:17 -- nvmf/common.sh@294 -- # net_devs=() 00:25:20.260 16:24:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:20.260 16:24:17 -- nvmf/common.sh@295 -- # e810=() 00:25:20.260 16:24:17 -- nvmf/common.sh@295 -- # local -ga e810 00:25:20.260 16:24:17 -- nvmf/common.sh@296 -- # x722=() 00:25:20.260 16:24:17 -- nvmf/common.sh@296 -- # local -ga x722 00:25:20.260 16:24:17 -- nvmf/common.sh@297 -- # mlx=() 00:25:20.260 16:24:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:20.260 16:24:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.260 16:24:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:20.260 16:24:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.260 16:24:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:20.260 Found 0000:27:00.0 
(0x8086 - 0x159b) 00:25:20.260 16:24:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.260 16:24:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:20.260 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:20.260 16:24:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.260 16:24:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.260 16:24:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.260 16:24:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:20.260 Found net devices under 0000:27:00.0: cvl_0_0 00:25:20.260 16:24:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.260 16:24:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.260 16:24:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.260 16:24:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.260 16:24:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:20.260 Found net devices under 0000:27:00.1: cvl_0_1 00:25:20.260 16:24:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.260 16:24:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:20.260 16:24:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:20.260 16:24:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:20.260 16:24:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.260 16:24:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.260 16:24:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.260 16:24:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:20.260 16:24:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.260 16:24:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.260 16:24:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:20.260 16:24:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.260 16:24:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.260 16:24:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:20.260 16:24:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:20.260 16:24:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.260 16:24:17 -- nvmf/common.sh@250 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.260 16:24:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.260 16:24:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.260 16:24:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:20.260 16:24:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.261 16:24:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.261 16:24:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.261 16:24:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:25:20.261 00:25:20.261 --- 10.0.0.2 ping statistics --- 00:25:20.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.261 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:25:20.261 16:24:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms 00:25:20.261 00:25:20.261 --- 10.0.0.1 ping statistics --- 00:25:20.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.261 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:25:20.261 16:24:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.261 16:24:18 -- nvmf/common.sh@410 -- # return 0 00:25:20.261 16:24:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.261 16:24:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.261 16:24:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.261 16:24:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.261 16:24:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.261 16:24:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.261 16:24:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.261 16:24:18 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:20.261 16:24:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.261 16:24:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.261 16:24:18 -- common/autotest_common.sh@10 -- # set +x 00:25:20.261 16:24:18 -- nvmf/common.sh@469 -- # nvmfpid=3187377 00:25:20.261 16:24:18 -- nvmf/common.sh@470 -- # waitforlisten 3187377 00:25:20.261 16:24:18 -- common/autotest_common.sh@819 -- # '[' -z 3187377 ']' 00:25:20.261 16:24:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:20.261 16:24:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.261 16:24:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.261 16:24:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.261 16:24:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.261 16:24:18 -- common/autotest_common.sh@10 -- # set +x 00:25:20.261 [2024-04-23 16:24:18.379124] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:25:20.261 [2024-04-23 16:24:18.379235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.261 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.261 [2024-04-23 16:24:18.507381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.261 [2024-04-23 16:24:18.610899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.261 [2024-04-23 16:24:18.611071] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.261 [2024-04-23 16:24:18.611085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.261 [2024-04-23 16:24:18.611095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.261 [2024-04-23 16:24:18.611120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.261 16:24:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.261 16:24:19 -- common/autotest_common.sh@852 -- # return 0 00:25:20.261 16:24:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:20.261 16:24:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.261 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:20.261 16:24:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.261 16:24:19 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:20.261 16:24:19 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:20.261 16:24:19 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:20.261 16:24:19 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:20.261 16:24:19 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:20.261 16:24:19 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:20.261 16:24:19 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:20.261 16:24:19 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:20.520 [2024-04-23 16:24:19.219287] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.520 [2024-04-23 16:24:19.235258] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:20.520 [2024-04-23 16:24:19.235448] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.520 malloc0 00:25:20.520 16:24:19 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.520 16:24:19 -- fips/fips.sh@148 -- # bdevperf_pid=3187690 00:25:20.520 16:24:19 -- fips/fips.sh@149 -- # waitforlisten 3187690 /var/tmp/bdevperf.sock 00:25:20.520 16:24:19 -- common/autotest_common.sh@819 -- # '[' -z 3187690 ']' 00:25:20.520 16:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.520 16:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.520 16:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:20.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.520 16:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.520 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:20.520 16:24:19 -- fips/fips.sh@146 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:20.520 [2024-04-23 16:24:19.424355] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:25:20.520 [2024-04-23 16:24:19.424515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187690 ] 00:25:20.781 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.781 [2024-04-23 16:24:19.558334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.781 [2024-04-23 16:24:19.652556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.344 16:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:21.344 16:24:20 -- common/autotest_common.sh@852 -- # return 0 00:25:21.344 16:24:20 -- fips/fips.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:21.344 [2024-04-23 16:24:20.209750] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.602 TLSTESTn1 00:25:21.602 16:24:20 -- fips/fips.sh@155 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.602 Running I/O for 10 seconds... 
00:25:31.640 00:25:31.640 Latency(us) 00:25:31.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:31.640 Verification LBA range: start 0x0 length 0x2000 00:25:31.640 TLSTESTn1 : 10.02 3373.39 13.18 0.00 0.00 37904.54 4363.32 72296.56 00:25:31.640 =================================================================================================================== 00:25:31.640 Total : 3373.39 13.18 0.00 0.00 37904.54 4363.32 72296.56 00:25:31.640 0 00:25:31.640 16:24:30 -- fips/fips.sh@1 -- # cleanup 00:25:31.640 16:24:30 -- fips/fips.sh@15 -- # process_shm --id 0 00:25:31.640 16:24:30 -- common/autotest_common.sh@796 -- # type=--id 00:25:31.640 16:24:30 -- common/autotest_common.sh@797 -- # id=0 00:25:31.640 16:24:30 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:25:31.640 16:24:30 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:31.640 16:24:30 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:25:31.640 16:24:30 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:25:31.640 16:24:30 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:25:31.640 16:24:30 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:31.640 nvmf_trace.0 00:25:31.640 16:24:30 -- common/autotest_common.sh@811 -- # return 0 00:25:31.640 16:24:30 -- fips/fips.sh@16 -- # killprocess 3187690 00:25:31.640 16:24:30 -- common/autotest_common.sh@926 -- # '[' -z 3187690 ']' 00:25:31.640 16:24:30 -- common/autotest_common.sh@930 -- # kill -0 3187690 00:25:31.640 16:24:30 -- common/autotest_common.sh@931 -- # uname 00:25:31.640 16:24:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:31.640 16:24:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3187690 00:25:31.640 16:24:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:31.640 16:24:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:31.640 16:24:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3187690' 00:25:31.640 killing process with pid 3187690 00:25:31.640 16:24:30 -- common/autotest_common.sh@945 -- # kill 3187690 00:25:31.640 Received shutdown signal, test time was about 10.000000 seconds 00:25:31.640 00:25:31.640 Latency(us) 00:25:31.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.640 =================================================================================================================== 00:25:31.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.640 16:24:30 -- common/autotest_common.sh@950 -- # wait 3187690 00:25:32.207 16:24:30 -- fips/fips.sh@17 -- # nvmftestfini 00:25:32.207 16:24:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:32.207 16:24:30 -- nvmf/common.sh@116 -- # sync 00:25:32.207 16:24:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:32.207 16:24:30 -- nvmf/common.sh@119 -- # set +e 00:25:32.207 16:24:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:32.207 16:24:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:32.207 rmmod nvme_tcp 00:25:32.207 rmmod nvme_fabrics 00:25:32.207 rmmod nvme_keyring 00:25:32.207 16:24:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:32.207 16:24:30 -- nvmf/common.sh@123 -- # set -e 00:25:32.207 16:24:30 -- nvmf/common.sh@124 -- # return 0 
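The summary table above reports, per job, runtime, IOPS, MiB/s, failures and timeouts per second, and average/min/max latency in microseconds. A small sketch for pulling those fields out of such a line when post-processing logs like this one (the sample line is copied from the run above; the field names are my own labels, not SPDK output):

```python
#!/usr/bin/env python3
"""Illustrative parser for a bdevperf per-job summary line."""

LINE = "TLSTESTn1 : 10.02 3373.39 13.18 0.00 0.00 37904.54 4363.32 72296.56"
FIELDS = ("runtime_s", "iops", "mib_s", "fail_s", "to_s",
          "avg_lat_us", "min_lat_us", "max_lat_us")

def parse_summary(line: str) -> dict:
    """Split 'JOB : <8 numeric columns>' into a labelled dict."""
    name, _, rest = line.partition(":")
    values = [float(v) for v in rest.split()]
    return {"job": name.strip(), **dict(zip(FIELDS, values))}

if __name__ == "__main__":
    stats = parse_summary(LINE)
    # e.g. flag runs whose average latency drifts past a chosen budget
    print(stats["job"], stats["iops"], stats["avg_lat_us"])
```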
00:25:32.207 16:24:30 -- nvmf/common.sh@477 -- # '[' -n 3187377 ']' 00:25:32.207 16:24:30 -- nvmf/common.sh@478 -- # killprocess 3187377 00:25:32.207 16:24:30 -- common/autotest_common.sh@926 -- # '[' -z 3187377 ']' 00:25:32.207 16:24:30 -- common/autotest_common.sh@930 -- # kill -0 3187377 00:25:32.207 16:24:30 -- common/autotest_common.sh@931 -- # uname 00:25:32.207 16:24:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:32.207 16:24:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3187377 00:25:32.207 16:24:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:32.207 16:24:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:32.207 16:24:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3187377' 00:25:32.207 killing process with pid 3187377 00:25:32.207 16:24:31 -- common/autotest_common.sh@945 -- # kill 3187377 00:25:32.207 16:24:31 -- common/autotest_common.sh@950 -- # wait 3187377 00:25:32.774 16:24:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:32.774 16:24:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:32.774 16:24:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:32.774 16:24:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.774 16:24:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:32.774 16:24:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.774 16:24:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.774 16:24:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.679 16:24:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:34.679 16:24:33 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:34.679 00:25:34.679 real 0m21.229s 00:25:34.679 user 0m21.838s 00:25:34.679 sys 0m9.814s 00:25:34.679 16:24:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.679 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:25:34.679 ************************************ 00:25:34.679 END TEST nvmf_fips 00:25:34.679 ************************************ 00:25:34.679 16:24:33 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:25:34.679 16:24:33 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:34.679 16:24:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:34.679 16:24:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.679 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:25:34.679 ************************************ 00:25:34.679 START TEST nvmf_fuzz 00:25:34.679 ************************************ 00:25:34.679 16:24:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:34.938 * Looking for test storage... 
00:25:34.938 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:25:34.938 16:24:33 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.938 16:24:33 -- nvmf/common.sh@7 -- # uname -s 00:25:34.938 16:24:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.938 16:24:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.938 16:24:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.938 16:24:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.938 16:24:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.938 16:24:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.938 16:24:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.938 16:24:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.938 16:24:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.938 16:24:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.938 16:24:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:34.938 16:24:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:34.938 16:24:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.938 16:24:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.938 16:24:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:34.938 16:24:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:34.938 16:24:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.938 16:24:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.938 16:24:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.938 16:24:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.938 16:24:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.938 16:24:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.938 16:24:33 -- paths/export.sh@5 -- # export PATH 00:25:34.938 16:24:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.938 16:24:33 -- nvmf/common.sh@46 -- # : 0 00:25:34.938 16:24:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:34.938 16:24:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:34.938 16:24:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:34.938 16:24:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.938 16:24:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.938 16:24:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:34.938 16:24:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:34.938 16:24:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:34.938 16:24:33 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:34.938 16:24:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:34.938 16:24:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.938 16:24:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:34.938 16:24:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:34.938 16:24:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:34.938 16:24:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.938 16:24:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.938 16:24:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.938 16:24:33 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:25:34.938 16:24:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:34.938 16:24:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:34.938 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:25:40.243 16:24:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.243 16:24:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:40.243 16:24:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:40.243 16:24:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:40.243 16:24:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:40.243 16:24:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:40.243 16:24:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:40.243 16:24:38 -- nvmf/common.sh@294 -- # net_devs=() 00:25:40.243 16:24:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:40.243 16:24:38 -- nvmf/common.sh@295 -- # e810=() 00:25:40.243 16:24:38 -- nvmf/common.sh@295 -- # local -ga e810 00:25:40.243 16:24:38 -- nvmf/common.sh@296 -- # 
x722=() 00:25:40.243 16:24:38 -- nvmf/common.sh@296 -- # local -ga x722 00:25:40.243 16:24:38 -- nvmf/common.sh@297 -- # mlx=() 00:25:40.243 16:24:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:40.243 16:24:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.243 16:24:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:40.243 16:24:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.243 16:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:40.243 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:40.243 16:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.243 16:24:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:40.243 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:40.243 16:24:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.243 16:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.243 16:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.243 16:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:40.243 Found net devices under 0000:27:00.0: cvl_0_0 00:25:40.243 16:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.243 16:24:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:25:40.243 16:24:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.243 16:24:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.243 16:24:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:40.243 Found net devices under 0000:27:00.1: cvl_0_1 00:25:40.243 16:24:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.243 16:24:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:40.243 16:24:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:40.243 16:24:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:40.243 16:24:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.243 16:24:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.243 16:24:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.243 16:24:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:40.243 16:24:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.243 16:24:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.243 16:24:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:40.243 16:24:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.243 16:24:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.243 16:24:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:40.243 16:24:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:40.243 16:24:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.243 16:24:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.243 16:24:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.243 16:24:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.243 16:24:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:40.243 16:24:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.243 16:24:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.243 16:24:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.243 16:24:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:40.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:25:40.243 00:25:40.243 --- 10.0.0.2 ping statistics --- 00:25:40.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.243 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:25:40.243 16:24:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:25:40.243 00:25:40.243 --- 10.0.0.1 ping statistics --- 00:25:40.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.243 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:25:40.243 16:24:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.243 16:24:39 -- nvmf/common.sh@410 -- # return 0 00:25:40.243 16:24:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:40.243 16:24:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.243 16:24:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:40.243 16:24:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:40.243 16:24:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.243 16:24:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:40.243 16:24:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:40.243 16:24:39 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3193963 00:25:40.243 16:24:39 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:40.243 16:24:39 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:40.243 16:24:39 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3193963 00:25:40.243 16:24:39 -- common/autotest_common.sh@819 -- # '[' -z 3193963 ']' 00:25:40.243 16:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.243 16:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.243 16:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:40.243 16:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.243 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 16:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.180 16:24:39 -- common/autotest_common.sh@852 -- # return 0 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.180 16:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.180 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 16:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:41.180 16:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.180 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 Malloc0 00:25:41.180 16:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.180 16:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.180 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 16:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.180 16:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.180 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 16:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.180 16:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.180 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.180 16:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:41.180 16:24:39 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:13.273 Fuzzing completed. Shutting down the fuzz application 00:26:13.273 00:26:13.273 Dumping successful admin opcodes: 00:26:13.273 8, 9, 10, 24, 00:26:13.273 Dumping successful io opcodes: 00:26:13.273 0, 9, 00:26:13.273 NS: 0x200003aefec0 I/O qp, Total commands completed: 839367, total successful commands: 4876, random_seed: 3473931776 00:26:13.273 NS: 0x200003aefec0 admin qp, Total commands completed: 87336, total successful commands: 697, random_seed: 2288655552 00:26:13.273 16:25:10 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:13.273 Fuzzing completed. 
Shutting down the fuzz application 00:26:13.273 00:26:13.273 Dumping successful admin opcodes: 00:26:13.273 24, 00:26:13.273 Dumping successful io opcodes: 00:26:13.273 00:26:13.273 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 692302701 00:26:13.273 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 692393743 00:26:13.273 16:25:11 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.273 16:25:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.273 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:26:13.273 16:25:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.273 16:25:11 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:13.273 16:25:11 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:13.273 16:25:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:13.273 16:25:11 -- nvmf/common.sh@116 -- # sync 00:26:13.273 16:25:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.273 16:25:11 -- nvmf/common.sh@119 -- # set +e 00:26:13.273 16:25:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:13.273 16:25:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.273 rmmod nvme_tcp 00:26:13.273 rmmod nvme_fabrics 00:26:13.273 rmmod nvme_keyring 00:26:13.273 16:25:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.273 16:25:12 -- nvmf/common.sh@123 -- # set -e 00:26:13.273 16:25:12 -- nvmf/common.sh@124 -- # return 0 00:26:13.273 16:25:12 -- nvmf/common.sh@477 -- # '[' -n 3193963 ']' 00:26:13.273 16:25:12 -- nvmf/common.sh@478 -- # killprocess 3193963 00:26:13.273 16:25:12 -- common/autotest_common.sh@926 -- # '[' -z 3193963 ']' 00:26:13.273 16:25:12 -- common/autotest_common.sh@930 -- # kill -0 3193963 00:26:13.273 16:25:12 -- common/autotest_common.sh@931 -- # uname 00:26:13.273 16:25:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.273 16:25:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3193963 00:26:13.273 16:25:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.273 16:25:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.273 16:25:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3193963' 00:26:13.273 killing process with pid 3193963 00:26:13.273 16:25:12 -- common/autotest_common.sh@945 -- # kill 3193963 00:26:13.273 16:25:12 -- common/autotest_common.sh@950 -- # wait 3193963 00:26:13.845 16:25:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.846 16:25:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.846 16:25:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.846 16:25:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.846 16:25:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.846 16:25:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.846 16:25:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.846 16:25:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.769 16:25:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:15.769 16:25:14 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:15.769 00:26:15.769 real 0m41.043s 00:26:15.769 user 0m57.740s 00:26:15.769 sys 0m12.532s 
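fabrics_fuzz.sh, which just finished above, drives two nvme_fuzz passes against a single malloc-backed subsystem: a 30-second randomized pass with a fixed seed (-S 123456), then a second pass driven by the requests in example.json. A condensed sketch of that flow, assuming the rpc_cmd wrapper maps onto scripts/rpc.py and that $SPDK points at a built SPDK tree:

#!/usr/bin/env bash
# Condensed sketch of the nvmf fuzz flow shown in the log above.
set -e
SPDK=${SPDK:-/path/to/spdk}     # assumed location of a built SPDK tree
RPC="$SPDK/scripts/rpc.py"
NS_EXEC=(ip netns exec cvl_0_0_ns_spdk)
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# Target on one core inside the namespace set up earlier.
"${NS_EXEC[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
TGT_PID=$!
sleep 2    # the real script polls with waitforlisten instead of sleeping

# One 64 MiB malloc namespace exported over TCP on 10.0.0.2:4420.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create -b Malloc0 64 512
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

FUZZ="$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz"
# Pass 1: 30 s of randomized commands, seeded so runs are reproducible.
"$FUZZ" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a
# Pass 2: run again using the requests from example.json.
"$FUZZ" -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a

# Teardown mirrors the log: drop the subsystem, then kill the target.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$TGT_PID"

The per-queue counters printed after each pass ("Total commands completed" / "total successful commands") are the fuzzer's own summary of how many of the generated commands completed successfully.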
00:26:15.769 16:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.769 16:25:14 -- common/autotest_common.sh@10 -- # set +x 00:26:15.769 ************************************ 00:26:15.769 END TEST nvmf_fuzz 00:26:15.769 ************************************ 00:26:15.769 16:25:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:15.769 16:25:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:15.769 16:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:15.769 16:25:14 -- common/autotest_common.sh@10 -- # set +x 00:26:15.769 ************************************ 00:26:15.769 START TEST nvmf_multiconnection 00:26:15.769 ************************************ 00:26:15.769 16:25:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:16.030 * Looking for test storage... 00:26:16.030 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:16.030 16:25:14 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.030 16:25:14 -- nvmf/common.sh@7 -- # uname -s 00:26:16.030 16:25:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.030 16:25:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.030 16:25:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.030 16:25:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.030 16:25:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.030 16:25:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.030 16:25:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.030 16:25:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.030 16:25:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.030 16:25:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.030 16:25:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:16.030 16:25:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:16.030 16:25:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.030 16:25:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.030 16:25:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:16.030 16:25:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:16.030 16:25:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.030 16:25:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.030 16:25:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.030 16:25:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.030 16:25:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.030 16:25:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.030 16:25:14 -- paths/export.sh@5 -- # export PATH 00:26:16.030 16:25:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.030 16:25:14 -- nvmf/common.sh@46 -- # : 0 00:26:16.030 16:25:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:16.030 16:25:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:16.030 16:25:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:16.030 16:25:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.030 16:25:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.030 16:25:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:16.031 16:25:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:16.031 16:25:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:16.031 16:25:14 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:16.031 16:25:14 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:16.031 16:25:14 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:16.031 16:25:14 -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:16.031 16:25:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:16.031 16:25:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.031 16:25:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:16.031 16:25:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:16.031 16:25:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:16.031 16:25:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.031 16:25:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.031 16:25:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.031 16:25:14 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:26:16.031 16:25:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:16.031 16:25:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:16.031 16:25:14 -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.306 16:25:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:21.306 16:25:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:21.306 16:25:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:21.306 16:25:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:21.306 16:25:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:21.306 16:25:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:21.306 16:25:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:21.306 16:25:19 -- nvmf/common.sh@294 -- # net_devs=() 00:26:21.306 16:25:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:21.306 16:25:19 -- nvmf/common.sh@295 -- # e810=() 00:26:21.306 16:25:19 -- nvmf/common.sh@295 -- # local -ga e810 00:26:21.306 16:25:19 -- nvmf/common.sh@296 -- # x722=() 00:26:21.306 16:25:19 -- nvmf/common.sh@296 -- # local -ga x722 00:26:21.306 16:25:19 -- nvmf/common.sh@297 -- # mlx=() 00:26:21.306 16:25:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:21.306 16:25:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.306 16:25:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:21.306 16:25:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:21.306 16:25:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.306 16:25:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:21.306 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:21.306 16:25:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:21.306 16:25:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:21.306 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:21.306 16:25:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.306 
16:25:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:21.306 16:25:19 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.306 16:25:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.306 16:25:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.306 16:25:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.306 16:25:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:21.306 Found net devices under 0000:27:00.0: cvl_0_0 00:26:21.306 16:25:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.306 16:25:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:21.306 16:25:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.306 16:25:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:21.306 16:25:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.306 16:25:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:21.306 Found net devices under 0000:27:00.1: cvl_0_1 00:26:21.306 16:25:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.306 16:25:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:21.306 16:25:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:21.306 16:25:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:21.306 16:25:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:21.306 16:25:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.306 16:25:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.306 16:25:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.307 16:25:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:21.307 16:25:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.307 16:25:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.307 16:25:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:21.307 16:25:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.307 16:25:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.307 16:25:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:21.307 16:25:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:21.307 16:25:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.307 16:25:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.307 16:25:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.307 16:25:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.307 16:25:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:21.307 16:25:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.307 16:25:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.307 16:25:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.307 16:25:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:21.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:21.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:26:21.307 00:26:21.307 --- 10.0.0.2 ping statistics --- 00:26:21.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.307 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:21.307 16:25:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:26:21.307 00:26:21.307 --- 10.0.0.1 ping statistics --- 00:26:21.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.307 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:26:21.307 16:25:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.307 16:25:19 -- nvmf/common.sh@410 -- # return 0 00:26:21.307 16:25:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:21.307 16:25:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.307 16:25:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:21.307 16:25:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:21.307 16:25:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.307 16:25:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:21.307 16:25:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:21.307 16:25:19 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:21.307 16:25:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:21.307 16:25:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:21.307 16:25:19 -- common/autotest_common.sh@10 -- # set +x 00:26:21.307 16:25:19 -- nvmf/common.sh@469 -- # nvmfpid=3204118 00:26:21.307 16:25:19 -- nvmf/common.sh@470 -- # waitforlisten 3204118 00:26:21.307 16:25:19 -- common/autotest_common.sh@819 -- # '[' -z 3204118 ']' 00:26:21.307 16:25:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.307 16:25:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.307 16:25:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.307 16:25:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.307 16:25:19 -- common/autotest_common.sh@10 -- # set +x 00:26:21.307 16:25:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:21.307 [2024-04-23 16:25:19.992401] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:26:21.307 [2024-04-23 16:25:19.992500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.307 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.307 [2024-04-23 16:25:20.117264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.307 [2024-04-23 16:25:20.211711] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:21.307 [2024-04-23 16:25:20.211879] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.307 [2024-04-23 16:25:20.211893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
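nvmfappstart has just launched nvmf_tgt on four cores (-m 0xF) inside the cvl_0_0_ns_spdk namespace; the rpc_cmd calls that follow build the multiconnection topology: eleven 64 MiB malloc bdevs, each exported through its own subsystem (cnode1 through cnode11) listening on 10.0.0.2:4420, after which the host connects to each subsystem and polls lsblk until the matching serial (SPDK1 through SPDK11) appears. A compact sketch of both loops, again assuming rpc_cmd maps onto scripts/rpc.py; the hostnqn/hostid values are the ones generated earlier in the log:

#!/usr/bin/env bash
# Sketch of the multiconnection setup and connect loop that follows in the log.
set -e
SPDK=${SPDK:-/path/to/spdk}     # assumed location of a built SPDK tree
RPC="$SPDK/scripts/rpc.py"
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3
NVMF_SUBSYS=11

"$RPC" nvmf_create_transport -t tcp -o -u 8192

# Target side: one malloc bdev + subsystem + listener per index.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Host side: connect each subsystem, then wait for its serial in lsblk,
# mirroring the nvme connect / waitforserial pattern in the log.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        sleep 2
    done
done

With all eleven namespaces visible as /dev/nvme*n1 block devices, fio-wrapper then runs the 10-second libaio read job shown further down (bs=262144, iodepth=64, one job per device).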
00:26:21.307 [2024-04-23 16:25:20.211902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.307 [2024-04-23 16:25:20.211966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.307 [2024-04-23 16:25:20.212068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.307 [2024-04-23 16:25:20.212096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.307 [2024-04-23 16:25:20.212107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.876 16:25:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:21.876 16:25:20 -- common/autotest_common.sh@852 -- # return 0 00:26:21.876 16:25:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:21.876 16:25:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:21.876 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:21.876 16:25:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.876 16:25:20 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.876 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.876 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:21.876 [2024-04-23 16:25:20.749887] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.876 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.876 16:25:20 -- target/multiconnection.sh@21 -- # seq 1 11 00:26:21.876 16:25:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.876 16:25:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:21.876 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.876 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:21.876 Malloc1 00:26:21.876 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.876 16:25:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:21.876 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.876 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 [2024-04-23 16:25:20.827761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.138 16:25:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 Malloc2 00:26:22.138 16:25:20 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.138 16:25:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 Malloc3 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.138 16:25:20 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 Malloc4 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 
-- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:20 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:22.138 16:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.138 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:22.138 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 Malloc5 00:26:22.138 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:22.138 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:22.138 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:22.138 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.138 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.138 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.138 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:22.138 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.138 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 Malloc6 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.398 16:25:21 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 Malloc7 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.398 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 Malloc8 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.398 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 Malloc9 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.398 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:22.398 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.398 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.398 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.399 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:22.399 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.399 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.399 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.399 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.399 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:22.399 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.399 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.657 Malloc10 00:26:22.657 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.657 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:22.657 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.657 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.657 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.657 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:22.657 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.657 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.657 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.657 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:22.657 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.657 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.657 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.658 16:25:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.658 16:25:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:22.658 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.658 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.658 Malloc11 00:26:22.658 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.658 16:25:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:22.658 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.658 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.658 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.658 16:25:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:22.658 16:25:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.658 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.658 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.658 16:25:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:22.658 16:25:21 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:26:22.658 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:22.658 16:25:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.658 16:25:21 -- target/multiconnection.sh@28 -- # seq 1 11 00:26:22.658 16:25:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.658 16:25:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:24.038 16:25:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:24.038 16:25:22 -- common/autotest_common.sh@1177 -- # local i=0 00:26:24.038 16:25:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.038 16:25:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:24.038 16:25:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:26.574 16:25:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:26.574 16:25:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:26.574 16:25:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:26:26.574 16:25:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:26.574 16:25:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.574 16:25:24 -- common/autotest_common.sh@1187 -- # return 0 00:26:26.574 16:25:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.574 16:25:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:27.511 16:25:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:27.511 16:25:26 -- common/autotest_common.sh@1177 -- # local i=0 00:26:27.511 16:25:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.511 16:25:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:27.511 16:25:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:30.051 16:25:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:30.051 16:25:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:30.051 16:25:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:26:30.051 16:25:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:30.051 16:25:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.051 16:25:28 -- common/autotest_common.sh@1187 -- # return 0 00:26:30.051 16:25:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.051 16:25:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:30.990 16:25:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:30.990 16:25:29 -- common/autotest_common.sh@1177 -- # local i=0 00:26:30.990 16:25:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.990 16:25:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:30.990 16:25:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:33.527 16:25:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:33.527 16:25:31 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:26:33.527 16:25:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:26:33.527 16:25:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:33.527 16:25:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.527 16:25:31 -- common/autotest_common.sh@1187 -- # return 0 00:26:33.527 16:25:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.527 16:25:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:34.906 16:25:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:34.906 16:25:33 -- common/autotest_common.sh@1177 -- # local i=0 00:26:34.906 16:25:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:34.906 16:25:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:34.906 16:25:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:36.928 16:25:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:36.928 16:25:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:36.928 16:25:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:26:36.928 16:25:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:36.928 16:25:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.928 16:25:35 -- common/autotest_common.sh@1187 -- # return 0 00:26:36.928 16:25:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.928 16:25:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:38.837 16:25:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:38.837 16:25:37 -- common/autotest_common.sh@1177 -- # local i=0 00:26:38.837 16:25:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.837 16:25:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:38.837 16:25:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:40.741 16:25:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:40.741 16:25:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:40.741 16:25:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:26:40.741 16:25:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:40.741 16:25:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.741 16:25:39 -- common/autotest_common.sh@1187 -- # return 0 00:26:40.741 16:25:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.741 16:25:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:42.118 16:25:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:42.118 16:25:40 -- common/autotest_common.sh@1177 -- # local i=0 00:26:42.119 16:25:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.119 16:25:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:42.119 16:25:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:44.652 
16:25:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:44.652 16:25:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:44.652 16:25:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:26:44.652 16:25:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:44.652 16:25:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.652 16:25:42 -- common/autotest_common.sh@1187 -- # return 0 00:26:44.652 16:25:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.652 16:25:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:46.027 16:25:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:46.027 16:25:44 -- common/autotest_common.sh@1177 -- # local i=0 00:26:46.027 16:25:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.027 16:25:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:46.027 16:25:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:47.938 16:25:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:47.938 16:25:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:47.938 16:25:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:26:47.938 16:25:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:47.938 16:25:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.938 16:25:46 -- common/autotest_common.sh@1187 -- # return 0 00:26:47.938 16:25:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.938 16:25:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:49.845 16:25:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:49.845 16:25:48 -- common/autotest_common.sh@1177 -- # local i=0 00:26:49.845 16:25:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.845 16:25:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:49.845 16:25:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:51.750 16:25:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:51.750 16:25:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:51.750 16:25:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:26:51.750 16:25:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:51.750 16:25:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.750 16:25:50 -- common/autotest_common.sh@1187 -- # return 0 00:26:51.750 16:25:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.750 16:25:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:53.658 16:25:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:53.658 16:25:52 -- common/autotest_common.sh@1177 -- # local i=0 00:26:53.658 16:25:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.658 16:25:52 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:53.658 16:25:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:55.567 16:25:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:55.567 16:25:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:55.567 16:25:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:26:55.567 16:25:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:55.567 16:25:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.567 16:25:54 -- common/autotest_common.sh@1187 -- # return 0 00:26:55.567 16:25:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.567 16:25:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:57.473 16:25:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:57.473 16:25:56 -- common/autotest_common.sh@1177 -- # local i=0 00:26:57.473 16:25:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.473 16:25:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:26:57.473 16:25:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:26:59.382 16:25:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:26:59.382 16:25:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:26:59.382 16:25:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:26:59.382 16:25:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:26:59.382 16:25:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.382 16:25:58 -- common/autotest_common.sh@1187 -- # return 0 00:26:59.382 16:25:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.382 16:25:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:01.324 16:26:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:01.324 16:26:00 -- common/autotest_common.sh@1177 -- # local i=0 00:27:01.324 16:26:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.324 16:26:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:01.324 16:26:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:03.226 16:26:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:03.226 16:26:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:03.226 16:26:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:27:03.226 16:26:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:03.226 16:26:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.226 16:26:02 -- common/autotest_common.sh@1187 -- # return 0 00:27:03.226 16:26:02 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:03.226 [global] 00:27:03.226 thread=1 00:27:03.226 invalidate=1 00:27:03.226 rw=read 00:27:03.226 time_based=1 00:27:03.226 runtime=10 00:27:03.226 ioengine=libaio 00:27:03.226 direct=1 00:27:03.226 bs=262144 00:27:03.226 iodepth=64 00:27:03.226 norandommap=1 00:27:03.226 numjobs=1 00:27:03.226 00:27:03.226 [job0] 
00:27:03.226 filename=/dev/nvme0n1 00:27:03.226 [job1] 00:27:03.226 filename=/dev/nvme10n1 00:27:03.226 [job2] 00:27:03.226 filename=/dev/nvme1n1 00:27:03.226 [job3] 00:27:03.226 filename=/dev/nvme2n1 00:27:03.226 [job4] 00:27:03.226 filename=/dev/nvme3n1 00:27:03.226 [job5] 00:27:03.226 filename=/dev/nvme4n1 00:27:03.226 [job6] 00:27:03.226 filename=/dev/nvme5n1 00:27:03.226 [job7] 00:27:03.226 filename=/dev/nvme6n1 00:27:03.226 [job8] 00:27:03.226 filename=/dev/nvme7n1 00:27:03.226 [job9] 00:27:03.226 filename=/dev/nvme8n1 00:27:03.226 [job10] 00:27:03.226 filename=/dev/nvme9n1 00:27:03.484 Could not set queue depth (nvme0n1) 00:27:03.484 Could not set queue depth (nvme10n1) 00:27:03.484 Could not set queue depth (nvme1n1) 00:27:03.484 Could not set queue depth (nvme2n1) 00:27:03.484 Could not set queue depth (nvme3n1) 00:27:03.484 Could not set queue depth (nvme4n1) 00:27:03.484 Could not set queue depth (nvme5n1) 00:27:03.484 Could not set queue depth (nvme6n1) 00:27:03.484 Could not set queue depth (nvme7n1) 00:27:03.484 Could not set queue depth (nvme8n1) 00:27:03.484 Could not set queue depth (nvme9n1) 00:27:03.743 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.743 fio-3.35 00:27:03.743 Starting 11 threads 00:27:15.949 00:27:15.949 job0: (groupid=0, jobs=1): err= 0: pid=3212795: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=966, BW=242MiB/s (253MB/s)(2425MiB/10035msec) 00:27:15.949 slat (usec): min=9, max=36215, avg=1005.00, stdev=2762.73 00:27:15.949 clat (msec): min=4, max=151, avg=65.14, stdev=33.19 00:27:15.949 lat (msec): min=4, max=161, avg=66.15, stdev=33.72 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 37], 00:27:15.949 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 43], 60.00th=[ 73], 00:27:15.949 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 116], 95.00th=[ 122], 00:27:15.949 | 99.00th=[ 134], 99.50th=[ 138], 99.90th=[ 150], 99.95th=[ 153], 00:27:15.949 | 99.99th=[ 153] 00:27:15.949 bw ( KiB/s): min=128512, max=450560, per=11.94%, avg=246681.60, stdev=125118.43, samples=20 00:27:15.949 iops : min= 502, max= 1760, avg=963.60, stdev=488.74, samples=20 00:27:15.949 lat (msec) : 10=0.08%, 20=0.26%, 50=53.01%, 100=25.23%, 250=21.42% 00:27:15.949 
cpu : usr=0.12%, sys=1.97%, ctx=2047, majf=0, minf=4097 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=9699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job1: (groupid=0, jobs=1): err= 0: pid=3212811: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=650, BW=163MiB/s (171MB/s)(1635MiB/10054msec) 00:27:15.949 slat (usec): min=8, max=80178, avg=1461.38, stdev=3749.12 00:27:15.949 clat (msec): min=4, max=219, avg=96.83, stdev=29.85 00:27:15.949 lat (msec): min=4, max=226, avg=98.29, stdev=30.34 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 17], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 66], 00:27:15.949 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 102], 60.00th=[ 110], 00:27:15.949 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 140], 00:27:15.949 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 218], 00:27:15.949 | 99.99th=[ 220] 00:27:15.949 bw ( KiB/s): min=115200, max=277504, per=8.02%, avg=165811.20, stdev=43741.92, samples=20 00:27:15.949 iops : min= 450, max= 1084, avg=647.70, stdev=170.87, samples=20 00:27:15.949 lat (msec) : 10=0.43%, 20=1.13%, 50=2.68%, 100=44.89%, 250=50.87% 00:27:15.949 cpu : usr=0.11%, sys=1.60%, ctx=1523, majf=0, minf=4097 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=6540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job2: (groupid=0, jobs=1): err= 0: pid=3212834: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=740, BW=185MiB/s (194MB/s)(1875MiB/10128msec) 00:27:15.949 slat (usec): min=8, max=116818, avg=773.59, stdev=3558.73 00:27:15.949 clat (msec): min=3, max=211, avg=85.57, stdev=41.48 00:27:15.949 lat (msec): min=3, max=287, avg=86.34, stdev=41.97 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 43], 00:27:15.949 | 30.00th=[ 59], 40.00th=[ 78], 50.00th=[ 92], 60.00th=[ 108], 00:27:15.949 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 130], 95.00th=[ 146], 00:27:15.949 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 205], 99.95th=[ 209], 00:27:15.949 | 99.99th=[ 211] 00:27:15.949 bw ( KiB/s): min=115712, max=276480, per=9.21%, avg=190387.20, stdev=47897.04, samples=20 00:27:15.949 iops : min= 452, max= 1080, avg=743.70, stdev=187.10, samples=20 00:27:15.949 lat (msec) : 4=0.08%, 10=1.72%, 20=5.20%, 50=17.66%, 100=29.89% 00:27:15.949 lat (msec) : 250=45.45% 00:27:15.949 cpu : usr=0.14%, sys=1.91%, ctx=1858, majf=0, minf=4097 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=7501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job3: (groupid=0, jobs=1): err= 0: pid=3212848: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=675, BW=169MiB/s 
(177MB/s)(1697MiB/10049msec) 00:27:15.949 slat (usec): min=6, max=119238, avg=1154.74, stdev=3979.98 00:27:15.949 clat (msec): min=17, max=294, avg=93.51, stdev=29.96 00:27:15.949 lat (msec): min=17, max=294, avg=94.67, stdev=30.34 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 42], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 63], 00:27:15.949 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 103], 00:27:15.949 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 125], 95.00th=[ 140], 00:27:15.949 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 203], 99.95th=[ 207], 00:27:15.949 | 99.99th=[ 296] 00:27:15.949 bw ( KiB/s): min=96256, max=290816, per=8.33%, avg=172185.60, stdev=45460.87, samples=20 00:27:15.949 iops : min= 376, max= 1136, avg=672.60, stdev=177.58, samples=20 00:27:15.949 lat (msec) : 20=0.18%, 50=5.32%, 100=51.22%, 250=43.28%, 500=0.01% 00:27:15.949 cpu : usr=0.15%, sys=1.82%, ctx=1629, majf=0, minf=4097 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=6789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job4: (groupid=0, jobs=1): err= 0: pid=3212856: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=799, BW=200MiB/s (210MB/s)(2015MiB/10073msec) 00:27:15.949 slat (usec): min=9, max=49148, avg=1229.06, stdev=3323.91 00:27:15.949 clat (msec): min=35, max=179, avg=78.70, stdev=21.64 00:27:15.949 lat (msec): min=35, max=179, avg=79.93, stdev=21.96 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 62], 00:27:15.949 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 80], 00:27:15.949 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 127], 00:27:15.949 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 169], 00:27:15.949 | 99.99th=[ 180] 00:27:15.949 bw ( KiB/s): min=130560, max=276992, per=9.90%, avg=204672.00, stdev=43855.19, samples=20 00:27:15.949 iops : min= 510, max= 1082, avg=799.50, stdev=171.31, samples=20 00:27:15.949 lat (msec) : 50=3.55%, 100=80.40%, 250=16.05% 00:27:15.949 cpu : usr=0.08%, sys=1.82%, ctx=1684, majf=0, minf=3598 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=8058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job5: (groupid=0, jobs=1): err= 0: pid=3212886: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=1063, BW=266MiB/s (279MB/s)(2683MiB/10093msec) 00:27:15.949 slat (usec): min=8, max=69534, avg=919.90, stdev=2645.57 00:27:15.949 clat (msec): min=20, max=194, avg=59.23, stdev=30.55 00:27:15.949 lat (msec): min=20, max=194, avg=60.15, stdev=31.02 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:27:15.949 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 56], 00:27:15.949 | 70.00th=[ 67], 80.00th=[ 83], 90.00th=[ 114], 95.00th=[ 125], 00:27:15.949 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 178], 00:27:15.949 | 99.99th=[ 194] 00:27:15.949 bw ( KiB/s): min=106196, 
max=447488, per=13.21%, avg=273060.20, stdev=119312.37, samples=20 00:27:15.949 iops : min= 414, max= 1748, avg=1066.60, stdev=466.12, samples=20 00:27:15.949 lat (msec) : 50=55.02%, 100=31.03%, 250=13.95% 00:27:15.949 cpu : usr=0.13%, sys=2.23%, ctx=2162, majf=0, minf=4097 00:27:15.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:27:15.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.949 issued rwts: total=10730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.949 job6: (groupid=0, jobs=1): err= 0: pid=3212901: Tue Apr 23 16:26:13 2024 00:27:15.949 read: IOPS=627, BW=157MiB/s (164MB/s)(1582MiB/10087msec) 00:27:15.949 slat (usec): min=10, max=115427, avg=1498.20, stdev=4167.09 00:27:15.949 clat (msec): min=2, max=225, avg=100.47, stdev=35.26 00:27:15.949 lat (msec): min=2, max=233, avg=101.97, stdev=35.84 00:27:15.949 clat percentiles (msec): 00:27:15.949 | 1.00th=[ 9], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 78], 00:27:15.949 | 30.00th=[ 88], 40.00th=[ 100], 50.00th=[ 108], 60.00th=[ 113], 00:27:15.949 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 133], 95.00th=[ 161], 00:27:15.950 | 99.00th=[ 190], 99.50th=[ 207], 99.90th=[ 211], 99.95th=[ 218], 00:27:15.950 | 99.99th=[ 226] 00:27:15.950 bw ( KiB/s): min=83968, max=353792, per=7.76%, avg=160307.20, stdev=54670.40, samples=20 00:27:15.950 iops : min= 328, max= 1382, avg=626.20, stdev=213.56, samples=20 00:27:15.950 lat (msec) : 4=0.32%, 10=0.82%, 20=0.36%, 50=10.50%, 100=29.62% 00:27:15.950 lat (msec) : 250=58.38% 00:27:15.950 cpu : usr=0.13%, sys=2.26%, ctx=1467, majf=0, minf=4097 00:27:15.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.950 issued rwts: total=6326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.950 job7: (groupid=0, jobs=1): err= 0: pid=3212913: Tue Apr 23 16:26:13 2024 00:27:15.950 read: IOPS=574, BW=144MiB/s (151MB/s)(1451MiB/10092msec) 00:27:15.950 slat (usec): min=6, max=79816, avg=1581.22, stdev=4787.59 00:27:15.950 clat (usec): min=1576, max=226214, avg=109632.18, stdev=35011.46 00:27:15.950 lat (usec): min=1621, max=264253, avg=111213.40, stdev=35694.17 00:27:15.950 clat percentiles (msec): 00:27:15.950 | 1.00th=[ 6], 5.00th=[ 30], 10.00th=[ 63], 20.00th=[ 97], 00:27:15.950 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 114], 60.00th=[ 117], 00:27:15.950 | 70.00th=[ 123], 80.00th=[ 129], 90.00th=[ 146], 95.00th=[ 163], 00:27:15.950 | 99.00th=[ 188], 99.50th=[ 203], 99.90th=[ 213], 99.95th=[ 220], 00:27:15.950 | 99.99th=[ 226] 00:27:15.950 bw ( KiB/s): min=88064, max=264192, per=7.11%, avg=146892.80, stdev=36723.34, samples=20 00:27:15.950 iops : min= 344, max= 1032, avg=573.80, stdev=143.45, samples=20 00:27:15.950 lat (msec) : 2=0.02%, 4=0.79%, 10=0.98%, 20=1.28%, 50=6.10% 00:27:15.950 lat (msec) : 100=14.46%, 250=76.37% 00:27:15.950 cpu : usr=0.09%, sys=1.82%, ctx=1434, majf=0, minf=4097 00:27:15.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:27:15.950 issued rwts: total=5802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.950 job8: (groupid=0, jobs=1): err= 0: pid=3212948: Tue Apr 23 16:26:13 2024 00:27:15.950 read: IOPS=582, BW=146MiB/s (153MB/s)(1466MiB/10075msec) 00:27:15.950 slat (usec): min=11, max=68420, avg=1657.07, stdev=4198.98 00:27:15.950 clat (msec): min=45, max=244, avg=108.19, stdev=27.12 00:27:15.950 lat (msec): min=45, max=252, avg=109.85, stdev=27.51 00:27:15.950 clat percentiles (msec): 00:27:15.950 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 75], 20.00th=[ 88], 00:27:15.950 | 30.00th=[ 94], 40.00th=[ 102], 50.00th=[ 108], 60.00th=[ 114], 00:27:15.950 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 138], 95.00th=[ 159], 00:27:15.950 | 99.00th=[ 186], 99.50th=[ 199], 99.90th=[ 220], 99.95th=[ 245], 00:27:15.950 | 99.99th=[ 245] 00:27:15.950 bw ( KiB/s): min=84992, max=205312, per=7.19%, avg=148505.60, stdev=26473.02, samples=20 00:27:15.950 iops : min= 332, max= 802, avg=580.10, stdev=103.41, samples=20 00:27:15.950 lat (msec) : 50=0.51%, 100=37.92%, 250=61.57% 00:27:15.950 cpu : usr=0.14%, sys=1.48%, ctx=1333, majf=0, minf=4097 00:27:15.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.950 issued rwts: total=5865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.950 job9: (groupid=0, jobs=1): err= 0: pid=3212957: Tue Apr 23 16:26:13 2024 00:27:15.950 read: IOPS=827, BW=207MiB/s (217MB/s)(2084MiB/10072msec) 00:27:15.950 slat (usec): min=10, max=45545, avg=1181.47, stdev=3081.66 00:27:15.950 clat (msec): min=13, max=182, avg=76.08, stdev=23.72 00:27:15.950 lat (msec): min=14, max=182, avg=77.26, stdev=24.09 00:27:15.950 clat percentiles (msec): 00:27:15.950 | 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 57], 00:27:15.950 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 75], 00:27:15.950 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 114], 95.00th=[ 129], 00:27:15.950 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 163], 00:27:15.950 | 99.99th=[ 184] 00:27:15.950 bw ( KiB/s): min=122368, max=282624, per=10.24%, avg=211737.60, stdev=50968.67, samples=20 00:27:15.950 iops : min= 478, max= 1104, avg=827.10, stdev=199.10, samples=20 00:27:15.950 lat (msec) : 20=0.01%, 50=6.54%, 100=77.32%, 250=16.13% 00:27:15.950 cpu : usr=0.16%, sys=2.53%, ctx=1759, majf=0, minf=4097 00:27:15.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.950 issued rwts: total=8334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.950 job10: (groupid=0, jobs=1): err= 0: pid=3212959: Tue Apr 23 16:26:13 2024 00:27:15.950 read: IOPS=611, BW=153MiB/s (160MB/s)(1531MiB/10018msec) 00:27:15.950 slat (usec): min=10, max=82961, avg=1090.09, stdev=3982.24 00:27:15.950 clat (msec): min=5, max=240, avg=103.47, stdev=37.51 00:27:15.950 lat (msec): min=5, max=262, avg=104.56, stdev=38.22 00:27:15.950 clat percentiles (msec): 00:27:15.950 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 69], 00:27:15.950 | 
30.00th=[ 90], 40.00th=[ 104], 50.00th=[ 113], 60.00th=[ 120], 00:27:15.950 | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 140], 95.00th=[ 155], 00:27:15.950 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 215], 00:27:15.950 | 99.99th=[ 241] 00:27:15.950 bw ( KiB/s): min=85504, max=265216, per=7.51%, avg=155187.20, stdev=39270.21, samples=20 00:27:15.950 iops : min= 334, max= 1036, avg=606.20, stdev=153.40, samples=20 00:27:15.950 lat (msec) : 10=0.46%, 20=1.01%, 50=9.63%, 100=26.40%, 250=62.50% 00:27:15.950 cpu : usr=0.13%, sys=1.88%, ctx=1719, majf=0, minf=4097 00:27:15.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:15.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:15.950 issued rwts: total=6125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:15.950 00:27:15.950 Run status group 0 (all jobs): 00:27:15.950 READ: bw=2018MiB/s (2116MB/s), 144MiB/s-266MiB/s (151MB/s-279MB/s), io=20.0GiB (21.4GB), run=10018-10128msec 00:27:15.950 00:27:15.950 Disk stats (read/write): 00:27:15.950 nvme0n1: ios=18871/0, merge=0/0, ticks=1222923/0, in_queue=1222923, util=96.40% 00:27:15.950 nvme10n1: ios=12747/0, merge=0/0, ticks=1221312/0, in_queue=1221312, util=96.66% 00:27:15.950 nvme1n1: ios=15001/0, merge=0/0, ticks=1263244/0, in_queue=1263244, util=97.10% 00:27:15.950 nvme2n1: ios=13232/0, merge=0/0, ticks=1226608/0, in_queue=1226608, util=97.22% 00:27:15.950 nvme3n1: ios=15784/0, merge=0/0, ticks=1224627/0, in_queue=1224627, util=97.40% 00:27:15.950 nvme4n1: ios=21137/0, merge=0/0, ticks=1220327/0, in_queue=1220327, util=97.81% 00:27:15.950 nvme5n1: ios=12331/0, merge=0/0, ticks=1217437/0, in_queue=1217437, util=98.02% 00:27:15.950 nvme6n1: ios=11307/0, merge=0/0, ticks=1219465/0, in_queue=1219465, util=98.21% 00:27:15.950 nvme7n1: ios=11410/0, merge=0/0, ticks=1219349/0, in_queue=1219349, util=98.77% 00:27:15.950 nvme8n1: ios=16355/0, merge=0/0, ticks=1221990/0, in_queue=1221990, util=99.03% 00:27:15.950 nvme9n1: ios=11782/0, merge=0/0, ticks=1229462/0, in_queue=1229462, util=99.25% 00:27:15.950 16:26:13 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:15.950 [global] 00:27:15.950 thread=1 00:27:15.950 invalidate=1 00:27:15.950 rw=randwrite 00:27:15.950 time_based=1 00:27:15.950 runtime=10 00:27:15.950 ioengine=libaio 00:27:15.950 direct=1 00:27:15.950 bs=262144 00:27:15.950 iodepth=64 00:27:15.950 norandommap=1 00:27:15.950 numjobs=1 00:27:15.950 00:27:15.950 [job0] 00:27:15.950 filename=/dev/nvme0n1 00:27:15.950 [job1] 00:27:15.950 filename=/dev/nvme10n1 00:27:15.950 [job2] 00:27:15.950 filename=/dev/nvme1n1 00:27:15.950 [job3] 00:27:15.950 filename=/dev/nvme2n1 00:27:15.950 [job4] 00:27:15.950 filename=/dev/nvme3n1 00:27:15.950 [job5] 00:27:15.950 filename=/dev/nvme4n1 00:27:15.950 [job6] 00:27:15.950 filename=/dev/nvme5n1 00:27:15.950 [job7] 00:27:15.950 filename=/dev/nvme6n1 00:27:15.950 [job8] 00:27:15.950 filename=/dev/nvme7n1 00:27:15.950 [job9] 00:27:15.950 filename=/dev/nvme8n1 00:27:15.950 [job10] 00:27:15.950 filename=/dev/nvme9n1 00:27:15.950 Could not set queue depth (nvme0n1) 00:27:15.950 Could not set queue depth (nvme10n1) 00:27:15.950 Could not set queue depth (nvme1n1) 00:27:15.950 Could not set queue depth (nvme2n1) 00:27:15.950 Could not set queue depth 
(nvme3n1) 00:27:15.950 Could not set queue depth (nvme4n1) 00:27:15.950 Could not set queue depth (nvme5n1) 00:27:15.950 Could not set queue depth (nvme6n1) 00:27:15.950 Could not set queue depth (nvme7n1) 00:27:15.950 Could not set queue depth (nvme8n1) 00:27:15.950 Could not set queue depth (nvme9n1) 00:27:15.950 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.950 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.951 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.951 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.951 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.951 fio-3.35 00:27:15.951 Starting 11 threads 00:27:25.929 00:27:25.929 job0: (groupid=0, jobs=1): err= 0: pid=3214871: Tue Apr 23 16:26:24 2024 00:27:25.929 write: IOPS=429, BW=107MiB/s (113MB/s)(1090MiB/10154msec); 0 zone resets 00:27:25.929 slat (usec): min=17, max=107666, avg=2138.17, stdev=4402.99 00:27:25.929 clat (msec): min=7, max=297, avg=146.82, stdev=33.79 00:27:25.929 lat (msec): min=7, max=297, avg=148.96, stdev=34.16 00:27:25.929 clat percentiles (msec): 00:27:25.929 | 1.00th=[ 42], 5.00th=[ 99], 10.00th=[ 113], 20.00th=[ 122], 00:27:25.929 | 30.00th=[ 131], 40.00th=[ 138], 50.00th=[ 144], 60.00th=[ 157], 00:27:25.929 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 201], 00:27:25.929 | 99.00th=[ 228], 99.50th=[ 257], 99.90th=[ 292], 99.95th=[ 292], 00:27:25.929 | 99.99th=[ 300] 00:27:25.929 bw ( KiB/s): min=77824, max=148480, per=7.69%, avg=109977.60, stdev=21049.82, samples=20 00:27:25.929 iops : min= 304, max= 580, avg=429.60, stdev=82.23, samples=20 00:27:25.929 lat (msec) : 10=0.05%, 20=0.18%, 50=1.15%, 100=4.15%, 250=93.90% 00:27:25.929 lat (msec) : 500=0.57% 00:27:25.929 cpu : usr=1.66%, sys=1.34%, ctx=1460, majf=0, minf=1 00:27:25.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:25.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.929 issued rwts: total=0,4360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.929 job1: (groupid=0, jobs=1): err= 0: pid=3214883: Tue Apr 23 16:26:24 2024 00:27:25.929 write: IOPS=423, BW=106MiB/s (111MB/s)(1075MiB/10155msec); 0 zone resets 00:27:25.929 slat (usec): min=24, max=69634, avg=2099.89, stdev=4434.70 00:27:25.929 
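For reference, the fio-wrapper call traced just before this run (-p nvmf -i 262144 -d 64 -t randwrite -r 10) maps its flags onto the [global] options listed above (bs=262144, iodepth=64, rw=randwrite, runtime=10) and appends one [jobN] section per discovered namespace. Reassembled as a single job file (hypothetical name multiconnection.fio; only the first two of the eleven job sections shown):

  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1
  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme10n1

fio then consumes this file directly (fio multiconnection.fio); the earlier read phase appears to use the same template with rw=read.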
clat (msec): min=5, max=297, avg=149.03, stdev=39.51 00:27:25.929 lat (msec): min=5, max=297, avg=151.13, stdev=40.05 00:27:25.929 clat percentiles (msec): 00:27:25.929 | 1.00th=[ 20], 5.00th=[ 70], 10.00th=[ 106], 20.00th=[ 130], 00:27:25.929 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 150], 60.00th=[ 163], 00:27:25.929 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 207], 00:27:25.929 | 99.00th=[ 243], 99.50th=[ 257], 99.90th=[ 292], 99.95th=[ 292], 00:27:25.929 | 99.99th=[ 300] 00:27:25.929 bw ( KiB/s): min=71168, max=150016, per=7.58%, avg=108390.40, stdev=19899.89, samples=20 00:27:25.929 iops : min= 278, max= 586, avg=423.40, stdev=77.73, samples=20 00:27:25.929 lat (msec) : 10=0.14%, 20=0.93%, 50=1.42%, 100=6.79%, 250=90.07% 00:27:25.929 lat (msec) : 500=0.65% 00:27:25.929 cpu : usr=1.26%, sys=1.27%, ctx=1640, majf=0, minf=1 00:27:25.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:25.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.929 issued rwts: total=0,4298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.929 job2: (groupid=0, jobs=1): err= 0: pid=3214884: Tue Apr 23 16:26:24 2024 00:27:25.929 write: IOPS=463, BW=116MiB/s (121MB/s)(1178MiB/10167msec); 0 zone resets 00:27:25.929 slat (usec): min=22, max=56119, avg=1994.07, stdev=3962.25 00:27:25.929 clat (msec): min=3, max=345, avg=136.03, stdev=39.53 00:27:25.929 lat (msec): min=3, max=345, avg=138.03, stdev=39.99 00:27:25.929 clat percentiles (msec): 00:27:25.929 | 1.00th=[ 28], 5.00th=[ 78], 10.00th=[ 102], 20.00th=[ 108], 00:27:25.929 | 30.00th=[ 113], 40.00th=[ 126], 50.00th=[ 131], 60.00th=[ 146], 00:27:25.929 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 197], 00:27:25.929 | 99.00th=[ 239], 99.50th=[ 279], 99.90th=[ 334], 99.95th=[ 334], 00:27:25.929 | 99.99th=[ 347] 00:27:25.929 bw ( KiB/s): min=73728, max=173568, per=8.32%, avg=118976.05, stdev=26746.15, samples=20 00:27:25.929 iops : min= 288, max= 678, avg=464.75, stdev=104.38, samples=20 00:27:25.929 lat (msec) : 4=0.04%, 10=0.13%, 20=0.25%, 50=2.65%, 100=5.60% 00:27:25.929 lat (msec) : 250=90.60%, 500=0.72% 00:27:25.929 cpu : usr=1.52%, sys=1.30%, ctx=1555, majf=0, minf=1 00:27:25.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:25.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.929 issued rwts: total=0,4712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.929 job3: (groupid=0, jobs=1): err= 0: pid=3214885: Tue Apr 23 16:26:24 2024 00:27:25.929 write: IOPS=608, BW=152MiB/s (160MB/s)(1547MiB/10163msec); 0 zone resets 00:27:25.929 slat (usec): min=21, max=36394, avg=1444.31, stdev=2758.73 00:27:25.929 clat (msec): min=9, max=343, avg=103.64, stdev=32.60 00:27:25.929 lat (msec): min=10, max=343, avg=105.09, stdev=32.81 00:27:25.929 clat percentiles (msec): 00:27:25.929 | 1.00th=[ 25], 5.00th=[ 66], 10.00th=[ 84], 20.00th=[ 87], 00:27:25.929 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 100], 00:27:25.929 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 140], 95.00th=[ 176], 00:27:25.929 | 99.00th=[ 192], 99.50th=[ 255], 99.90th=[ 321], 99.95th=[ 334], 00:27:25.929 | 99.99th=[ 342] 00:27:25.929 bw ( 
KiB/s): min=88576, max=184320, per=10.96%, avg=156789.75, stdev=27814.20, samples=20 00:27:25.929 iops : min= 346, max= 720, avg=612.45, stdev=108.65, samples=20 00:27:25.929 lat (msec) : 10=0.02%, 20=0.50%, 50=2.52%, 100=57.17%, 250=39.24% 00:27:25.929 lat (msec) : 500=0.55% 00:27:25.929 cpu : usr=1.83%, sys=1.72%, ctx=2127, majf=0, minf=1 00:27:25.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:25.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.929 issued rwts: total=0,6187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.929 job4: (groupid=0, jobs=1): err= 0: pid=3214886: Tue Apr 23 16:26:24 2024 00:27:25.929 write: IOPS=512, BW=128MiB/s (134MB/s)(1293MiB/10096msec); 0 zone resets 00:27:25.929 slat (usec): min=16, max=62149, avg=1899.62, stdev=3514.17 00:27:25.929 clat (msec): min=25, max=237, avg=123.04, stdev=26.17 00:27:25.930 lat (msec): min=25, max=237, avg=124.94, stdev=26.34 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 97], 00:27:25.930 | 30.00th=[ 114], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:27:25.930 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 169], 00:27:25.930 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 239], 99.95th=[ 239], 00:27:25.930 | 99.99th=[ 239] 00:27:25.930 bw ( KiB/s): min=75776, max=172544, per=9.14%, avg=130739.20, stdev=23824.16, samples=20 00:27:25.930 iops : min= 296, max= 674, avg=510.70, stdev=93.06, samples=20 00:27:25.930 lat (msec) : 50=0.23%, 100=21.59%, 250=78.18% 00:27:25.930 cpu : usr=1.82%, sys=1.58%, ctx=1400, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,5170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job5: (groupid=0, jobs=1): err= 0: pid=3214887: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=575, BW=144MiB/s (151MB/s)(1462MiB/10155msec); 0 zone resets 00:27:25.930 slat (usec): min=21, max=74308, avg=1622.19, stdev=3134.14 00:27:25.930 clat (msec): min=2, max=236, avg=109.40, stdev=26.81 00:27:25.930 lat (msec): min=2, max=236, avg=111.03, stdev=27.02 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 23], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 88], 00:27:25.930 | 30.00th=[ 92], 40.00th=[ 104], 50.00th=[ 114], 60.00th=[ 117], 00:27:25.930 | 70.00th=[ 122], 80.00th=[ 128], 90.00th=[ 140], 95.00th=[ 153], 00:27:25.930 | 99.00th=[ 186], 99.50th=[ 207], 99.90th=[ 230], 99.95th=[ 234], 00:27:25.930 | 99.99th=[ 236] 00:27:25.930 bw ( KiB/s): min=106496, max=188416, per=10.35%, avg=148019.20, stdev=23393.83, samples=20 00:27:25.930 iops : min= 416, max= 736, avg=578.20, stdev=91.38, samples=20 00:27:25.930 lat (msec) : 4=0.05%, 10=0.27%, 20=0.51%, 50=1.64%, 100=33.27% 00:27:25.930 lat (msec) : 250=64.25% 00:27:25.930 cpu : usr=1.86%, sys=1.33%, ctx=1856, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,5846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job6: (groupid=0, jobs=1): err= 0: pid=3214888: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=474, BW=119MiB/s (124MB/s)(1205MiB/10167msec); 0 zone resets 00:27:25.930 slat (usec): min=15, max=56467, avg=2036.18, stdev=3850.14 00:27:25.930 clat (msec): min=18, max=341, avg=132.90, stdev=43.41 00:27:25.930 lat (msec): min=18, max=341, avg=134.94, stdev=43.92 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 42], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 103], 00:27:25.930 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 138], 60.00th=[ 148], 00:27:25.930 | 70.00th=[ 161], 80.00th=[ 171], 90.00th=[ 182], 95.00th=[ 197], 00:27:25.930 | 99.00th=[ 215], 99.50th=[ 275], 99.90th=[ 330], 99.95th=[ 330], 00:27:25.930 | 99.99th=[ 342] 00:27:25.930 bw ( KiB/s): min=81920, max=262656, per=8.51%, avg=121779.20, stdev=41903.83, samples=20 00:27:25.930 iops : min= 320, max= 1026, avg=475.70, stdev=163.69, samples=20 00:27:25.930 lat (msec) : 20=0.02%, 50=2.26%, 100=15.35%, 250=81.66%, 500=0.71% 00:27:25.930 cpu : usr=1.59%, sys=1.93%, ctx=1389, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,4820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job7: (groupid=0, jobs=1): err= 0: pid=3214889: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=531, BW=133MiB/s (139MB/s)(1350MiB/10154msec); 0 zone resets 00:27:25.930 slat (usec): min=22, max=44396, avg=1811.64, stdev=3455.85 00:27:25.930 clat (msec): min=24, max=298, avg=118.50, stdev=40.23 00:27:25.930 lat (msec): min=26, max=298, avg=120.31, stdev=40.69 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 47], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 70], 00:27:25.930 | 30.00th=[ 97], 40.00th=[ 111], 50.00th=[ 127], 60.00th=[ 134], 00:27:25.930 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 176], 00:27:25.930 | 99.00th=[ 226], 99.50th=[ 239], 99.90th=[ 292], 99.95th=[ 292], 00:27:25.930 | 99.99th=[ 300] 00:27:25.930 bw ( KiB/s): min=96256, max=251904, per=9.55%, avg=136576.00, stdev=43888.84, samples=20 00:27:25.930 iops : min= 376, max= 984, avg=533.50, stdev=171.44, samples=20 00:27:25.930 lat (msec) : 50=1.04%, 100=31.12%, 250=67.43%, 500=0.41% 00:27:25.930 cpu : usr=1.63%, sys=1.31%, ctx=1517, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,5398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job8: (groupid=0, jobs=1): err= 0: pid=3214890: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=506, BW=127MiB/s (133MB/s)(1278MiB/10096msec); 0 zone resets 00:27:25.930 slat (usec): min=21, max=113500, avg=1953.82, stdev=4470.68 00:27:25.930 clat (msec): min=62, max=305, avg=124.38, stdev=29.39 00:27:25.930 lat (msec): min=62, max=305, avg=126.33, stdev=29.57 00:27:25.930 clat percentiles (msec): 
00:27:25.930 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 97], 00:27:25.930 | 30.00th=[ 113], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:27:25.930 | 70.00th=[ 129], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 169], 00:27:25.930 | 99.00th=[ 234], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 292], 00:27:25.930 | 99.99th=[ 305] 00:27:25.930 bw ( KiB/s): min=63488, max=173568, per=9.03%, avg=129254.40, stdev=26079.21, samples=20 00:27:25.930 iops : min= 248, max= 678, avg=504.90, stdev=101.87, samples=20 00:27:25.930 lat (msec) : 100=22.67%, 250=76.86%, 500=0.47% 00:27:25.930 cpu : usr=1.77%, sys=1.42%, ctx=1296, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,5112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job9: (groupid=0, jobs=1): err= 0: pid=3214891: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=422, BW=106MiB/s (111MB/s)(1074MiB/10166msec); 0 zone resets 00:27:25.930 slat (usec): min=15, max=27281, avg=2210.80, stdev=4132.44 00:27:25.930 clat (msec): min=7, max=341, avg=149.17, stdev=36.30 00:27:25.930 lat (msec): min=7, max=341, avg=151.38, stdev=36.73 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 28], 5.00th=[ 69], 10.00th=[ 118], 20.00th=[ 133], 00:27:25.930 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 155], 60.00th=[ 163], 00:27:25.930 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 194], 00:27:25.930 | 99.00th=[ 218], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 334], 00:27:25.930 | 99.99th=[ 342] 00:27:25.930 bw ( KiB/s): min=81920, max=147968, per=7.58%, avg=108375.05, stdev=15039.29, samples=20 00:27:25.930 iops : min= 320, max= 578, avg=423.30, stdev=58.76, samples=20 00:27:25.930 lat (msec) : 10=0.05%, 20=0.35%, 50=3.17%, 100=3.12%, 250=92.53% 00:27:25.930 lat (msec) : 500=0.79% 00:27:25.930 cpu : usr=1.14%, sys=1.35%, ctx=1407, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,4296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 job10: (groupid=0, jobs=1): err= 0: pid=3214892: Tue Apr 23 16:26:24 2024 00:27:25.930 write: IOPS=657, BW=164MiB/s (172MB/s)(1655MiB/10065msec); 0 zone resets 00:27:25.930 slat (usec): min=21, max=98599, avg=1476.75, stdev=3199.65 00:27:25.930 clat (msec): min=10, max=189, avg=95.82, stdev=18.43 00:27:25.930 lat (msec): min=10, max=189, avg=97.29, stdev=18.46 00:27:25.930 clat percentiles (msec): 00:27:25.930 | 1.00th=[ 34], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 86], 00:27:25.930 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:27:25.930 | 70.00th=[ 102], 80.00th=[ 115], 90.00th=[ 120], 95.00th=[ 125], 00:27:25.930 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 184], 00:27:25.930 | 99.99th=[ 190] 00:27:25.930 bw ( KiB/s): min=128512, max=189440, per=11.73%, avg=167823.35, stdev=20678.05, samples=20 00:27:25.930 iops : min= 502, max= 740, avg=655.55, stdev=80.78, samples=20 00:27:25.930 lat (msec) : 20=0.29%, 50=1.60%, 100=67.77%, 
250=30.34% 00:27:25.930 cpu : usr=2.25%, sys=2.20%, ctx=1752, majf=0, minf=1 00:27:25.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:25.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:25.930 issued rwts: total=0,6618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.930 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:25.930 00:27:25.930 Run status group 0 (all jobs): 00:27:25.930 WRITE: bw=1397MiB/s (1465MB/s), 106MiB/s-164MiB/s (111MB/s-172MB/s), io=13.9GiB (14.9GB), run=10065-10167msec 00:27:25.930 00:27:25.930 Disk stats (read/write): 00:27:25.931 nvme0n1: ios=49/8666, merge=0/0, ticks=3294/1226656, in_queue=1229950, util=99.68% 00:27:25.931 nvme10n1: ios=46/8540, merge=0/0, ticks=1197/1227936, in_queue=1229133, util=99.97% 00:27:25.931 nvme1n1: ios=13/9368, merge=0/0, ticks=346/1226752, in_queue=1227098, util=97.41% 00:27:25.931 nvme2n1: ios=0/12317, merge=0/0, ticks=0/1229029, in_queue=1229029, util=97.22% 00:27:25.931 nvme3n1: ios=0/10337, merge=0/0, ticks=0/1228964, in_queue=1228964, util=97.33% 00:27:25.931 nvme4n1: ios=47/11637, merge=0/0, ticks=2552/1226377, in_queue=1228929, util=99.88% 00:27:25.931 nvme5n1: ios=32/9580, merge=0/0, ticks=50/1223703, in_queue=1223753, util=98.20% 00:27:25.931 nvme6n1: ios=46/10743, merge=0/0, ticks=712/1225466, in_queue=1226178, util=99.95% 00:27:25.931 nvme7n1: ios=39/10222, merge=0/0, ticks=2475/1213211, in_queue=1215686, util=99.92% 00:27:25.931 nvme8n1: ios=0/8532, merge=0/0, ticks=0/1226066, in_queue=1226066, util=98.95% 00:27:25.931 nvme9n1: ios=45/12872, merge=0/0, ticks=3443/1179053, in_queue=1182496, util=99.93% 00:27:25.931 16:26:24 -- target/multiconnection.sh@36 -- # sync 00:27:25.931 16:26:24 -- target/multiconnection.sh@37 -- # seq 1 11 00:27:25.931 16:26:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.931 16:26:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:25.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:25.931 16:26:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:25.931 16:26:24 -- common/autotest_common.sh@1198 -- # local i=0 00:27:25.931 16:26:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:27:25.931 16:26:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:25.931 16:26:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:25.931 16:26:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:27:25.931 16:26:24 -- common/autotest_common.sh@1210 -- # return 0 00:27:25.931 16:26:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.931 16:26:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:25.931 16:26:24 -- common/autotest_common.sh@10 -- # set +x 00:27:25.931 16:26:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:25.931 16:26:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.931 16:26:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:26.189 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:26.189 16:26:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:26.189 16:26:25 -- common/autotest_common.sh@1198 -- # local i=0 00:27:26.189 16:26:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 
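The teardown being traced here (multiconnection.sh lines 37-40) walks cnode1 through cnode11 with the same three steps: disconnect the initiator, wait until the namespace's SPDKn serial disappears from lsblk, then delete the subsystem on the target. A condensed sketch of that pattern; rpc_cmd in the trace wraps the SPDK RPC client, shown here as a direct rpc.py call, and the retry cap is an assumption:

  for i in $(seq 1 11); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      n=0
      # poll until no block device advertises serial SPDK$i any more
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
          sleep 1
          n=$((n + 1))
          [ "$n" -lt 15 ] || break
      done
      # remove the subsystem on the target side
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done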
00:27:26.189 16:26:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:27:26.189 16:26:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:26.189 16:26:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:27:26.189 16:26:25 -- common/autotest_common.sh@1210 -- # return 0 00:27:26.189 16:26:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:26.189 16:26:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.189 16:26:25 -- common/autotest_common.sh@10 -- # set +x 00:27:26.189 16:26:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.189 16:26:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:26.189 16:26:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:26.755 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:26.755 16:26:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:26.755 16:26:25 -- common/autotest_common.sh@1198 -- # local i=0 00:27:26.755 16:26:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:27:26.755 16:26:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:26.755 16:26:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:26.755 16:26:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:27:26.755 16:26:25 -- common/autotest_common.sh@1210 -- # return 0 00:27:26.755 16:26:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:26.755 16:26:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.755 16:26:25 -- common/autotest_common.sh@10 -- # set +x 00:27:26.755 16:26:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.755 16:26:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:26.755 16:26:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:27.016 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:27.016 16:26:25 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:27.016 16:26:25 -- common/autotest_common.sh@1198 -- # local i=0 00:27:27.016 16:26:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:27.016 16:26:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:27:27.016 16:26:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:27:27.016 16:26:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:27.016 16:26:25 -- common/autotest_common.sh@1210 -- # return 0 00:27:27.016 16:26:25 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:27.016 16:26:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.016 16:26:25 -- common/autotest_common.sh@10 -- # set +x 00:27:27.016 16:26:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.016 16:26:25 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.016 16:26:25 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:27.583 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:27.583 16:26:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:27.583 16:26:26 -- common/autotest_common.sh@1198 -- # local i=0 00:27:27.583 16:26:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:27.583 16:26:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:27:27.583 
16:26:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:27:27.583 16:26:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:27.583 16:26:26 -- common/autotest_common.sh@1210 -- # return 0 00:27:27.583 16:26:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:27.583 16:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.583 16:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:27.583 16:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.583 16:26:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.583 16:26:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:27.844 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:27.844 16:26:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:27.844 16:26:26 -- common/autotest_common.sh@1198 -- # local i=0 00:27:27.844 16:26:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:27.844 16:26:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:27:27.844 16:26:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:27.844 16:26:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:27:27.844 16:26:26 -- common/autotest_common.sh@1210 -- # return 0 00:27:27.844 16:26:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:27.844 16:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:27.844 16:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:27.844 16:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:27.844 16:26:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.844 16:26:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:28.103 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:28.103 16:26:26 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:28.103 16:26:26 -- common/autotest_common.sh@1198 -- # local i=0 00:27:28.103 16:26:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:28.103 16:26:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:27:28.103 16:26:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:27:28.103 16:26:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:28.103 16:26:26 -- common/autotest_common.sh@1210 -- # return 0 00:27:28.103 16:26:26 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:28.103 16:26:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.103 16:26:26 -- common/autotest_common.sh@10 -- # set +x 00:27:28.103 16:26:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.103 16:26:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.103 16:26:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:28.361 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:28.361 16:26:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:28.361 16:26:27 -- common/autotest_common.sh@1198 -- # local i=0 00:27:28.361 16:26:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:27:28.361 16:26:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:28.361 16:26:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:28.361 16:26:27 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:27:28.361 16:26:27 -- common/autotest_common.sh@1210 -- # return 0 00:27:28.361 16:26:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:28.361 16:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.361 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:27:28.361 16:26:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.361 16:26:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.361 16:26:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:28.618 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:28.618 16:26:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:28.618 16:26:27 -- common/autotest_common.sh@1198 -- # local i=0 00:27:28.618 16:26:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:28.618 16:26:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:27:28.618 16:26:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:28.618 16:26:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:27:28.618 16:26:27 -- common/autotest_common.sh@1210 -- # return 0 00:27:28.618 16:26:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:28.619 16:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.619 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:27:28.619 16:26:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.619 16:26:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.619 16:26:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:28.877 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:28.877 16:26:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:28.877 16:26:27 -- common/autotest_common.sh@1198 -- # local i=0 00:27:28.877 16:26:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:27:28.877 16:26:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:28.877 16:26:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:28.877 16:26:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:27:28.877 16:26:27 -- common/autotest_common.sh@1210 -- # return 0 00:27:28.877 16:26:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:28.877 16:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.877 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:27:28.877 16:26:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.877 16:26:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.877 16:26:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:29.135 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:29.135 16:26:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:29.135 16:26:27 -- common/autotest_common.sh@1198 -- # local i=0 00:27:29.135 16:26:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:27:29.135 16:26:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:27:29.135 16:26:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:27:29.135 16:26:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:29.135 16:26:27 -- 
common/autotest_common.sh@1210 -- # return 0 00:27:29.135 16:26:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:29.135 16:26:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.135 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:27:29.135 16:26:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.135 16:26:28 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:29.135 16:26:28 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:29.135 16:26:28 -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:29.135 16:26:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:29.135 16:26:28 -- nvmf/common.sh@116 -- # sync 00:27:29.135 16:26:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:29.135 16:26:28 -- nvmf/common.sh@119 -- # set +e 00:27:29.135 16:26:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:29.135 16:26:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:29.135 rmmod nvme_tcp 00:27:29.135 rmmod nvme_fabrics 00:27:29.135 rmmod nvme_keyring 00:27:29.396 16:26:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:29.396 16:26:28 -- nvmf/common.sh@123 -- # set -e 00:27:29.396 16:26:28 -- nvmf/common.sh@124 -- # return 0 00:27:29.396 16:26:28 -- nvmf/common.sh@477 -- # '[' -n 3204118 ']' 00:27:29.396 16:26:28 -- nvmf/common.sh@478 -- # killprocess 3204118 00:27:29.396 16:26:28 -- common/autotest_common.sh@926 -- # '[' -z 3204118 ']' 00:27:29.396 16:26:28 -- common/autotest_common.sh@930 -- # kill -0 3204118 00:27:29.396 16:26:28 -- common/autotest_common.sh@931 -- # uname 00:27:29.396 16:26:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.396 16:26:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3204118 00:27:29.396 16:26:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.396 16:26:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:29.396 16:26:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3204118' 00:27:29.396 killing process with pid 3204118 00:27:29.396 16:26:28 -- common/autotest_common.sh@945 -- # kill 3204118 00:27:29.396 16:26:28 -- common/autotest_common.sh@950 -- # wait 3204118 00:27:30.331 16:26:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:30.331 16:26:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:30.331 16:26:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:30.331 16:26:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.331 16:26:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:30.331 16:26:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.331 16:26:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.331 16:26:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.865 16:26:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:32.865 00:27:32.865 real 1m16.619s 00:27:32.865 user 5m6.057s 00:27:32.865 sys 0m17.918s 00:27:32.865 16:26:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.865 16:26:31 -- common/autotest_common.sh@10 -- # set +x 00:27:32.865 ************************************ 00:27:32.865 END TEST nvmf_multiconnection 00:27:32.865 ************************************ 00:27:32.865 16:26:31 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:32.865 16:26:31 
-- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:32.865 16:26:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.865 16:26:31 -- common/autotest_common.sh@10 -- # set +x 00:27:32.865 ************************************ 00:27:32.865 START TEST nvmf_initiator_timeout 00:27:32.865 ************************************ 00:27:32.865 16:26:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:32.865 * Looking for test storage... 00:27:32.865 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:27:32.865 16:26:31 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.865 16:26:31 -- nvmf/common.sh@7 -- # uname -s 00:27:32.865 16:26:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.865 16:26:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.865 16:26:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.865 16:26:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.865 16:26:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.865 16:26:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.865 16:26:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.865 16:26:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.865 16:26:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.865 16:26:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.865 16:26:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:32.865 16:26:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:32.865 16:26:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.865 16:26:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.865 16:26:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:32.865 16:26:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:32.865 16:26:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.865 16:26:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.865 16:26:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.865 16:26:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.865 16:26:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.866 
16:26:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.866 16:26:31 -- paths/export.sh@5 -- # export PATH 00:27:32.866 16:26:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.866 16:26:31 -- nvmf/common.sh@46 -- # : 0 00:27:32.866 16:26:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:32.866 16:26:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:32.866 16:26:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:32.866 16:26:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.866 16:26:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.866 16:26:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:32.866 16:26:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:32.866 16:26:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:32.866 16:26:31 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.866 16:26:31 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.866 16:26:31 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:32.866 16:26:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:32.866 16:26:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.866 16:26:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:32.866 16:26:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:32.866 16:26:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:32.866 16:26:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.866 16:26:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.866 16:26:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.866 16:26:31 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:27:32.866 16:26:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:32.866 16:26:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:32.866 16:26:31 -- common/autotest_common.sh@10 -- # set +x 00:27:38.297 16:26:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:38.297 16:26:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:38.297 16:26:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:38.297 16:26:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:38.297 16:26:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:38.297 16:26:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:38.297 16:26:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:38.297 16:26:36 -- nvmf/common.sh@294 -- # net_devs=() 00:27:38.297 16:26:36 -- 
nvmf/common.sh@294 -- # local -ga net_devs 00:27:38.297 16:26:36 -- nvmf/common.sh@295 -- # e810=() 00:27:38.297 16:26:36 -- nvmf/common.sh@295 -- # local -ga e810 00:27:38.297 16:26:36 -- nvmf/common.sh@296 -- # x722=() 00:27:38.297 16:26:36 -- nvmf/common.sh@296 -- # local -ga x722 00:27:38.297 16:26:36 -- nvmf/common.sh@297 -- # mlx=() 00:27:38.297 16:26:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:38.297 16:26:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.297 16:26:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:38.297 16:26:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:38.297 16:26:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.297 16:26:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:38.297 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:38.297 16:26:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.297 16:26:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:38.297 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:38.297 16:26:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:38.297 16:26:36 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:27:38.297 16:26:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.297 16:26:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.297 16:26:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.297 16:26:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.298 16:26:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 
00:27:38.298 Found net devices under 0000:27:00.0: cvl_0_0 00:27:38.298 16:26:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.298 16:26:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.298 16:26:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.298 16:26:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.298 16:26:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.298 16:26:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:38.298 Found net devices under 0000:27:00.1: cvl_0_1 00:27:38.298 16:26:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.298 16:26:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:38.298 16:26:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:38.298 16:26:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:38.298 16:26:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:38.298 16:26:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:38.298 16:26:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.298 16:26:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.298 16:26:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.298 16:26:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:38.298 16:26:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.298 16:26:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.298 16:26:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:38.298 16:26:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.298 16:26:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.298 16:26:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:38.298 16:26:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:38.298 16:26:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.298 16:26:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.298 16:26:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.298 16:26:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.298 16:26:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:38.298 16:26:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.298 16:26:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.298 16:26:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.298 16:26:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:38.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:27:38.298 00:27:38.298 --- 10.0.0.2 ping statistics --- 00:27:38.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.298 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:38.298 16:26:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.538 ms 00:27:38.298 00:27:38.298 --- 10.0.0.1 ping statistics --- 00:27:38.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.298 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:27:38.298 16:26:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.298 16:26:36 -- nvmf/common.sh@410 -- # return 0 00:27:38.298 16:26:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:38.298 16:26:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.298 16:26:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:38.298 16:26:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:38.298 16:26:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.298 16:26:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:38.298 16:26:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:38.298 16:26:36 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:38.298 16:26:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:38.298 16:26:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:38.298 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:27:38.298 16:26:36 -- nvmf/common.sh@469 -- # nvmfpid=3221555 00:27:38.298 16:26:36 -- nvmf/common.sh@470 -- # waitforlisten 3221555 00:27:38.298 16:26:36 -- common/autotest_common.sh@819 -- # '[' -z 3221555 ']' 00:27:38.298 16:26:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.298 16:26:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.298 16:26:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.298 16:26:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.298 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:27:38.298 16:26:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.298 [2024-04-23 16:26:37.075282] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:27:38.298 [2024-04-23 16:26:37.075413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.298 [2024-04-23 16:26:37.215338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.556 [2024-04-23 16:26:37.309237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:38.556 [2024-04-23 16:26:37.309409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.556 [2024-04-23 16:26:37.309422] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.556 [2024-04-23 16:26:37.309431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:38.556 [2024-04-23 16:26:37.309502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.556 [2024-04-23 16:26:37.309527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.556 [2024-04-23 16:26:37.309658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.556 [2024-04-23 16:26:37.309662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.122 16:26:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.122 16:26:37 -- common/autotest_common.sh@852 -- # return 0 00:27:39.122 16:26:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:39.122 16:26:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.122 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.122 16:26:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.122 16:26:37 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:39.122 16:26:37 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:39.122 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.122 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.122 Malloc0 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:39.123 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.123 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.123 Delay0 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.123 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.123 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.123 [2024-04-23 16:26:37.848159] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:39.123 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.123 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.123 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.123 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.123 16:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.123 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:27:39.123 [2024-04-23 16:26:37.876346] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.123 16:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.123 16:26:37 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:40.503 16:26:39 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:40.503 16:26:39 -- common/autotest_common.sh@1177 -- # local i=0 00:27:40.503 16:26:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.503 16:26:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:27:40.503 16:26:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:27:43.035 16:26:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:27:43.035 16:26:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:27:43.035 16:26:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:27:43.035 16:26:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:27:43.035 16:26:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:27:43.035 16:26:41 -- common/autotest_common.sh@1187 -- # return 0 00:27:43.035 16:26:41 -- target/initiator_timeout.sh@35 -- # fio_pid=3222417 00:27:43.035 16:26:41 -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:43.035 16:26:41 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:43.035 [global] 00:27:43.035 thread=1 00:27:43.035 invalidate=1 00:27:43.035 rw=write 00:27:43.035 time_based=1 00:27:43.035 runtime=60 00:27:43.035 ioengine=libaio 00:27:43.035 direct=1 00:27:43.035 bs=4096 00:27:43.035 iodepth=1 00:27:43.035 norandommap=0 00:27:43.035 numjobs=1 00:27:43.035 00:27:43.035 verify_dump=1 00:27:43.035 verify_backlog=512 00:27:43.035 verify_state_save=0 00:27:43.035 do_verify=1 00:27:43.035 verify=crc32c-intel 00:27:43.035 [job0] 00:27:43.035 filename=/dev/nvme0n1 00:27:43.035 Could not set queue depth (nvme0n1) 00:27:43.035 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:43.035 fio-3.35 00:27:43.035 Starting 1 thread 00:27:45.569 16:26:44 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:45.569 16:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:45.569 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.569 true 00:27:45.569 16:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:45.569 16:26:44 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:45.569 16:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:45.569 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.569 true 00:27:45.569 16:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:45.569 16:26:44 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:45.569 16:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:45.569 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.569 true 00:27:45.569 16:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:45.569 16:26:44 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:45.569 16:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:45.569 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:27:45.569 true 00:27:45.569 16:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:45.569 
16:26:44 -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:48.855 16:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.855 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:27:48.855 true 00:27:48.855 16:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:48.855 16:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.855 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:27:48.855 true 00:27:48.855 16:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:48.855 16:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.855 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:27:48.855 true 00:27:48.855 16:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:48.855 16:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.855 16:26:47 -- common/autotest_common.sh@10 -- # set +x 00:27:48.855 true 00:27:48.855 16:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:48.855 16:26:47 -- target/initiator_timeout.sh@54 -- # wait 3222417 00:28:45.099 00:28:45.099 job0: (groupid=0, jobs=1): err= 0: pid=3222700: Tue Apr 23 16:27:41 2024 00:28:45.099 read: IOPS=26, BW=107KiB/s (110kB/s)(6448KiB/60020msec) 00:28:45.099 slat (usec): min=3, max=11344, avg=22.65, stdev=282.46 00:28:45.099 clat (usec): min=305, max=41732k, avg=36851.11, stdev=1039298.59 00:28:45.099 lat (usec): min=313, max=41732k, avg=36873.76, stdev=1039298.43 00:28:45.099 clat percentiles (usec): 00:28:45.099 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 343], 00:28:45.099 | 20.00th=[ 355], 30.00th=[ 367], 40.00th=[ 379], 00:28:45.099 | 50.00th=[ 400], 60.00th=[ 469], 70.00th=[ 506], 00:28:45.099 | 80.00th=[ 41681], 90.00th=[ 42206], 95.00th=[ 42206], 00:28:45.099 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:45.099 | 99.95th=[17112761], 99.99th=[17112761] 00:28:45.099 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60020msec); 0 zone resets 00:28:45.099 slat (nsec): min=5448, max=68496, avg=14009.36, stdev=10061.88 00:28:45.099 clat (usec): min=176, max=764, avg=259.39, stdev=45.52 00:28:45.099 lat (usec): min=185, max=803, avg=273.40, stdev=50.88 00:28:45.099 clat percentiles (usec): 00:28:45.099 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:28:45.099 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 260], 00:28:45.099 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 334], 00:28:45.099 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 441], 99.95th=[ 537], 00:28:45.099 | 99.99th=[ 766] 00:28:45.099 bw ( KiB/s): min= 1152, max= 4096, per=100.00%, avg=2730.67, stdev=1201.93, samples=6 00:28:45.099 iops : min= 288, max= 1024, avg=682.67, stdev=300.48, samples=6 00:28:45.099 lat (usec) : 250=26.67%, 500=59.48%, 750=2.46%, 1000=0.11% 00:28:45.099 lat (msec) : 2=0.05%, 50=11.20%, >=2000=0.03% 00:28:45.099 cpu : usr=0.05%, sys=0.09%, ctx=3664, majf=0, minf=1 00:28:45.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:45.099 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.099 issued rwts: total=1612,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:45.099 00:28:45.099 Run status group 0 (all jobs): 00:28:45.099 READ: bw=107KiB/s (110kB/s), 107KiB/s-107KiB/s (110kB/s-110kB/s), io=6448KiB (6603kB), run=60020-60020msec 00:28:45.099 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60020-60020msec 00:28:45.099 00:28:45.099 Disk stats (read/write): 00:28:45.099 nvme0n1: ios=1661/2048, merge=0/0, ticks=18865/519, in_queue=19384, util=99.91% 00:28:45.099 16:27:41 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:45.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:45.099 16:27:42 -- common/autotest_common.sh@1198 -- # local i=0 00:28:45.099 16:27:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:28:45.099 16:27:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:45.099 16:27:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:45.099 16:27:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:45.099 16:27:42 -- common/autotest_common.sh@1210 -- # return 0 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:45.099 nvmf hotplug test: fio successful as expected 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.099 16:27:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.099 16:27:42 -- common/autotest_common.sh@10 -- # set +x 00:28:45.099 16:27:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:45.099 16:27:42 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:45.099 16:27:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.099 16:27:42 -- nvmf/common.sh@116 -- # sync 00:28:45.099 16:27:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.099 16:27:42 -- nvmf/common.sh@119 -- # set +e 00:28:45.099 16:27:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.099 16:27:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.099 rmmod nvme_tcp 00:28:45.099 rmmod nvme_fabrics 00:28:45.099 rmmod nvme_keyring 00:28:45.099 16:27:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:45.099 16:27:42 -- nvmf/common.sh@123 -- # set -e 00:28:45.099 16:27:42 -- nvmf/common.sh@124 -- # return 0 00:28:45.099 16:27:42 -- nvmf/common.sh@477 -- # '[' -n 3221555 ']' 00:28:45.099 16:27:42 -- nvmf/common.sh@478 -- # killprocess 3221555 00:28:45.099 16:27:42 -- common/autotest_common.sh@926 -- # '[' -z 3221555 ']' 00:28:45.099 16:27:42 -- common/autotest_common.sh@930 -- # kill -0 3221555 00:28:45.099 16:27:42 -- common/autotest_common.sh@931 -- # uname 00:28:45.099 16:27:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.099 16:27:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
3221555 00:28:45.099 16:27:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:45.099 16:27:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:45.099 16:27:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3221555' 00:28:45.099 killing process with pid 3221555 00:28:45.099 16:27:42 -- common/autotest_common.sh@945 -- # kill 3221555 00:28:45.099 16:27:42 -- common/autotest_common.sh@950 -- # wait 3221555 00:28:45.099 16:27:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:45.099 16:27:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:45.099 16:27:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:45.099 16:27:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.099 16:27:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:45.099 16:27:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.099 16:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.099 16:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.033 16:27:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:46.033 00:28:46.033 real 1m13.509s 00:28:46.033 user 4m40.119s 00:28:46.034 sys 0m5.403s 00:28:46.034 16:27:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.034 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.034 ************************************ 00:28:46.034 END TEST nvmf_initiator_timeout 00:28:46.034 ************************************ 00:28:46.034 16:27:44 -- nvmf/nvmf.sh@69 -- # [[ phy-fallback == phy ]] 00:28:46.034 16:27:44 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:28:46.034 16:27:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:46.034 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.034 16:27:44 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:28:46.034 16:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:46.034 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.034 16:27:44 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:28:46.034 16:27:44 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:46.034 16:27:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:46.034 16:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:46.034 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:28:46.034 ************************************ 00:28:46.034 START TEST nvmf_multicontroller 00:28:46.034 ************************************ 00:28:46.034 16:27:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:46.294 * Looking for test storage... 
00:28:46.294 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:28:46.294 16:27:45 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.294 16:27:45 -- nvmf/common.sh@7 -- # uname -s 00:28:46.294 16:27:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.294 16:27:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.294 16:27:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.294 16:27:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.294 16:27:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.295 16:27:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.295 16:27:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.295 16:27:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.295 16:27:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.295 16:27:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.295 16:27:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:46.295 16:27:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:46.295 16:27:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.295 16:27:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.295 16:27:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:46.295 16:27:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:46.295 16:27:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.295 16:27:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.295 16:27:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.295 16:27:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.295 16:27:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.295 16:27:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.295 16:27:45 -- paths/export.sh@5 -- # export PATH 00:28:46.295 16:27:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.295 16:27:45 -- nvmf/common.sh@46 -- # : 0 00:28:46.295 16:27:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:46.295 16:27:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:46.295 16:27:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:46.295 16:27:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.295 16:27:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.295 16:27:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:46.295 16:27:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:46.295 16:27:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:46.295 16:27:45 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:46.295 16:27:45 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:46.295 16:27:45 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:46.295 16:27:45 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:46.295 16:27:45 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:46.295 16:27:45 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:46.295 16:27:45 -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:46.295 16:27:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:46.295 16:27:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.295 16:27:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:46.295 16:27:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:46.295 16:27:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:46.295 16:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.295 16:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.295 16:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.295 16:27:45 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:28:46.295 16:27:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:46.295 16:27:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:46.295 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:28:51.572 16:27:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:51.572 16:27:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:51.572 16:27:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:51.572 16:27:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 
00:28:51.572 16:27:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:51.572 16:27:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:51.572 16:27:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:51.572 16:27:50 -- nvmf/common.sh@294 -- # net_devs=() 00:28:51.572 16:27:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:51.572 16:27:50 -- nvmf/common.sh@295 -- # e810=() 00:28:51.572 16:27:50 -- nvmf/common.sh@295 -- # local -ga e810 00:28:51.572 16:27:50 -- nvmf/common.sh@296 -- # x722=() 00:28:51.572 16:27:50 -- nvmf/common.sh@296 -- # local -ga x722 00:28:51.572 16:27:50 -- nvmf/common.sh@297 -- # mlx=() 00:28:51.572 16:27:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:51.572 16:27:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.572 16:27:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:51.572 16:27:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:51.572 16:27:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:51.572 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:51.572 16:27:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:51.572 16:27:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:51.572 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:51.572 16:27:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:51.572 16:27:50 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.572 16:27:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.572 16:27:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:51.572 Found net devices under 0000:27:00.0: cvl_0_0 00:28:51.572 16:27:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.572 16:27:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:51.572 16:27:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.572 16:27:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.572 16:27:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:51.572 Found net devices under 0000:27:00.1: cvl_0_1 00:28:51.572 16:27:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.572 16:27:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:51.572 16:27:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:51.572 16:27:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.572 16:27:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.572 16:27:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.572 16:27:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:51.572 16:27:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.572 16:27:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.572 16:27:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:51.572 16:27:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.572 16:27:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.572 16:27:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:51.572 16:27:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:51.572 16:27:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.572 16:27:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.572 16:27:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.572 16:27:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.572 16:27:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:51.572 16:27:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.572 16:27:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.572 16:27:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.572 16:27:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:51.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:51.572 00:28:51.572 --- 10.0.0.2 ping statistics --- 00:28:51.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.572 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:51.572 16:27:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:28:51.572 00:28:51.572 --- 10.0.0.1 ping statistics --- 00:28:51.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.572 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:28:51.572 16:27:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.572 16:27:50 -- nvmf/common.sh@410 -- # return 0 00:28:51.572 16:27:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:51.572 16:27:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.572 16:27:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:51.572 16:27:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.572 16:27:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:51.572 16:27:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:51.832 16:27:50 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:51.832 16:27:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:51.832 16:27:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:51.832 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:28:51.832 16:27:50 -- nvmf/common.sh@469 -- # nvmfpid=3238934 00:28:51.832 16:27:50 -- nvmf/common.sh@470 -- # waitforlisten 3238934 00:28:51.832 16:27:50 -- common/autotest_common.sh@819 -- # '[' -z 3238934 ']' 00:28:51.832 16:27:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.832 16:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.832 16:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:51.832 16:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.832 16:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:51.832 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:28:51.832 [2024-04-23 16:27:50.591516] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:51.832 [2024-04-23 16:27:50.591618] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.832 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.832 [2024-04-23 16:27:50.710733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:52.093 [2024-04-23 16:27:50.807777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:52.093 [2024-04-23 16:27:50.807947] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.093 [2024-04-23 16:27:50.807960] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.093 [2024-04-23 16:27:50.807968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:52.093 [2024-04-23 16:27:50.808021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.093 [2024-04-23 16:27:50.808056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.093 [2024-04-23 16:27:50.808066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.667 16:27:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:52.667 16:27:51 -- common/autotest_common.sh@852 -- # return 0 00:28:52.667 16:27:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:52.667 16:27:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.667 16:27:51 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 [2024-04-23 16:27:51.349519] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 Malloc0 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 [2024-04-23 16:27:51.430726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 [2024-04-23 16:27:51.438726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 Malloc1 00:28:52.667 16:27:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:52.667 16:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:52.667 16:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.667 16:27:51 -- host/multicontroller.sh@44 -- # bdevperf_pid=3239081 00:28:52.667 16:27:51 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:52.667 16:27:51 -- host/multicontroller.sh@47 -- # waitforlisten 3239081 /var/tmp/bdevperf.sock 00:28:52.667 16:27:51 -- common/autotest_common.sh@819 -- # '[' -z 3239081 ']' 00:28:52.667 16:27:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:52.667 16:27:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:52.667 16:27:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:52.667 16:27:51 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:52.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:52.667 16:27:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:52.667 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 16:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:53.607 16:27:52 -- common/autotest_common.sh@852 -- # return 0 00:28:53.607 16:27:52 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 NVMe0n1 00:28:53.607 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.607 16:27:52 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 16:27:52 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:53.607 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.607 1 00:28:53.607 16:27:52 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:53.607 16:27:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:53.607 16:27:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:53.607 16:27:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 request: 00:28:53.607 { 00:28:53.607 "name": "NVMe0", 00:28:53.607 "trtype": "tcp", 00:28:53.607 "traddr": "10.0.0.2", 00:28:53.607 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:53.607 "hostaddr": "10.0.0.2", 00:28:53.607 "hostsvcid": "60000", 00:28:53.607 "adrfam": "ipv4", 00:28:53.607 "trsvcid": "4420", 00:28:53.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.607 "method": "bdev_nvme_attach_controller", 00:28:53.607 "req_id": 1 00:28:53.607 } 00:28:53.607 Got JSON-RPC error response 00:28:53.607 response: 00:28:53.607 { 00:28:53.607 "code": -114, 00:28:53.607 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:53.607 } 00:28:53.607 16:27:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # es=1 00:28:53.607 16:27:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:53.607 16:27:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:53.607 16:27:52 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:53.607 16:27:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:53.607 16:27:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:53.607 16:27:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 request: 00:28:53.607 { 00:28:53.607 "name": "NVMe0", 00:28:53.607 "trtype": "tcp", 00:28:53.607 "traddr": "10.0.0.2", 00:28:53.607 "hostaddr": "10.0.0.2", 00:28:53.607 "hostsvcid": "60000", 00:28:53.607 "adrfam": "ipv4", 00:28:53.607 "trsvcid": "4420", 00:28:53.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.607 "method": "bdev_nvme_attach_controller", 00:28:53.607 "req_id": 1 00:28:53.607 } 00:28:53.607 Got JSON-RPC error response 00:28:53.607 response: 00:28:53.607 { 00:28:53.607 "code": -114, 00:28:53.607 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:53.607 } 00:28:53.607 16:27:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # es=1 00:28:53.607 16:27:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:53.607 16:27:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:53.607 16:27:52 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:53.607 16:27:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 request: 00:28:53.607 { 00:28:53.607 "name": "NVMe0", 00:28:53.607 "trtype": "tcp", 00:28:53.607 "traddr": "10.0.0.2", 00:28:53.607 "hostaddr": 
"10.0.0.2", 00:28:53.607 "hostsvcid": "60000", 00:28:53.607 "adrfam": "ipv4", 00:28:53.607 "trsvcid": "4420", 00:28:53.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.607 "multipath": "disable", 00:28:53.607 "method": "bdev_nvme_attach_controller", 00:28:53.607 "req_id": 1 00:28:53.607 } 00:28:53.607 Got JSON-RPC error response 00:28:53.607 response: 00:28:53.607 { 00:28:53.607 "code": -114, 00:28:53.607 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:53.607 } 00:28:53.607 16:27:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # es=1 00:28:53.607 16:27:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:53.607 16:27:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:53.607 16:27:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:53.607 16:27:52 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:53.607 16:27:52 -- common/autotest_common.sh@640 -- # local es=0 00:28:53.607 16:27:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:53.607 16:27:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:53.607 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:53.607 16:27:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:53.607 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.607 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.607 request: 00:28:53.607 { 00:28:53.607 "name": "NVMe0", 00:28:53.607 "trtype": "tcp", 00:28:53.608 "traddr": "10.0.0.2", 00:28:53.608 "hostaddr": "10.0.0.2", 00:28:53.608 "hostsvcid": "60000", 00:28:53.608 "adrfam": "ipv4", 00:28:53.608 "trsvcid": "4420", 00:28:53.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.608 "multipath": "failover", 00:28:53.608 "method": "bdev_nvme_attach_controller", 00:28:53.608 "req_id": 1 00:28:53.608 } 00:28:53.608 Got JSON-RPC error response 00:28:53.608 response: 00:28:53.608 { 00:28:53.608 "code": -114, 00:28:53.608 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:53.608 } 00:28:53.608 16:27:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:53.608 16:27:52 -- common/autotest_common.sh@643 -- # es=1 00:28:53.608 16:27:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:53.608 16:27:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:53.608 16:27:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:53.608 16:27:52 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:53.608 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.608 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.866 00:28:53.866 16:27:52 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:28:53.866 16:27:52 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:53.866 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.866 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.866 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.866 16:27:52 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:53.866 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.866 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.866 00:28:53.867 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.867 16:27:52 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:53.867 16:27:52 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:53.867 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.867 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:28:53.867 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.867 16:27:52 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:53.867 16:27:52 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:55.250 0 00:28:55.250 16:27:53 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:55.250 16:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.250 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:28:55.250 16:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.250 16:27:53 -- host/multicontroller.sh@100 -- # killprocess 3239081 00:28:55.250 16:27:53 -- common/autotest_common.sh@926 -- # '[' -z 3239081 ']' 00:28:55.250 16:27:53 -- common/autotest_common.sh@930 -- # kill -0 3239081 00:28:55.250 16:27:53 -- common/autotest_common.sh@931 -- # uname 00:28:55.250 16:27:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.250 16:27:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3239081 00:28:55.250 16:27:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:55.250 16:27:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:55.250 16:27:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3239081' 00:28:55.250 killing process with pid 3239081 00:28:55.250 16:27:53 -- common/autotest_common.sh@945 -- # kill 3239081 00:28:55.250 16:27:53 -- common/autotest_common.sh@950 -- # wait 3239081 00:28:55.508 16:27:54 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.508 16:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.508 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:28:55.508 16:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.508 16:27:54 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:55.508 16:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.508 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:28:55.508 16:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.508 16:27:54 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:55.508 
16:27:54 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:55.508 16:27:54 -- common/autotest_common.sh@1597 -- # read -r file 00:28:55.508 16:27:54 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:55.508 16:27:54 -- common/autotest_common.sh@1596 -- # sort -u 00:28:55.508 16:27:54 -- common/autotest_common.sh@1598 -- # cat 00:28:55.508 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:55.509 [2024-04-23 16:27:51.591958] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:28:55.509 [2024-04-23 16:27:51.592110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239081 ] 00:28:55.509 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.509 [2024-04-23 16:27:51.723091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.509 [2024-04-23 16:27:51.816955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.509 [2024-04-23 16:27:52.768988] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 5c7bc533-bab1-48d7-a969-eaacc9209309 already exists 00:28:55.509 [2024-04-23 16:27:52.769029] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:5c7bc533-bab1-48d7-a969-eaacc9209309 alias for bdev NVMe1n1 00:28:55.509 [2024-04-23 16:27:52.769044] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:55.509 Running I/O for 1 seconds... 00:28:55.509 00:28:55.509 Latency(us) 00:28:55.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.509 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:55.509 NVMe0n1 : 1.00 25306.04 98.85 0.00 0.00 5042.84 1784.99 9037.07 00:28:55.509 =================================================================================================================== 00:28:55.509 Total : 25306.04 98.85 0.00 0.00 5042.84 1784.99 9037.07 00:28:55.509 Received shutdown signal, test time was about 1.000000 seconds 00:28:55.509 00:28:55.509 Latency(us) 00:28:55.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.509 =================================================================================================================== 00:28:55.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.509 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:55.509 16:27:54 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:55.509 16:27:54 -- common/autotest_common.sh@1597 -- # read -r file 00:28:55.509 16:27:54 -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:55.509 16:27:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:55.509 16:27:54 -- nvmf/common.sh@116 -- # sync 00:28:55.509 16:27:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:55.509 16:27:54 -- nvmf/common.sh@119 -- # set +e 00:28:55.509 16:27:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:55.509 16:27:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:55.509 rmmod nvme_tcp 00:28:55.509 rmmod nvme_fabrics 00:28:55.509 rmmod nvme_keyring 00:28:55.509 16:27:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:55.509 16:27:54 -- nvmf/common.sh@123 -- # set -e 00:28:55.509 16:27:54 -- 
nvmf/common.sh@124 -- # return 0 00:28:55.509 16:27:54 -- nvmf/common.sh@477 -- # '[' -n 3238934 ']' 00:28:55.509 16:27:54 -- nvmf/common.sh@478 -- # killprocess 3238934 00:28:55.509 16:27:54 -- common/autotest_common.sh@926 -- # '[' -z 3238934 ']' 00:28:55.509 16:27:54 -- common/autotest_common.sh@930 -- # kill -0 3238934 00:28:55.509 16:27:54 -- common/autotest_common.sh@931 -- # uname 00:28:55.509 16:27:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:55.509 16:27:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3238934 00:28:55.768 16:27:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:55.768 16:27:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:55.768 16:27:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3238934' 00:28:55.768 killing process with pid 3238934 00:28:55.768 16:27:54 -- common/autotest_common.sh@945 -- # kill 3238934 00:28:55.768 16:27:54 -- common/autotest_common.sh@950 -- # wait 3238934 00:28:56.337 16:27:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:56.337 16:27:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:56.337 16:27:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:56.337 16:27:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.337 16:27:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:56.337 16:27:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.337 16:27:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.337 16:27:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.245 16:27:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:58.245 00:28:58.245 real 0m12.133s 00:28:58.245 user 0m16.798s 00:28:58.245 sys 0m4.849s 00:28:58.245 16:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.245 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:28:58.245 ************************************ 00:28:58.245 END TEST nvmf_multicontroller 00:28:58.245 ************************************ 00:28:58.245 16:27:57 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:58.245 16:27:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:58.245 16:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:58.246 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:28:58.246 ************************************ 00:28:58.246 START TEST nvmf_aer 00:28:58.246 ************************************ 00:28:58.246 16:27:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:58.246 * Looking for test storage... 
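For reference, the -114 failures in the multicontroller run above all come from re-issuing bdev_nvme_attach_controller against the bdevperf RPC socket under a name that is already taken. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py and a bdevperf instance already listening on /var/tmp/bdevperf.sock with nqn.2016-06.io.spdk:cnode1 exported on 10.0.0.2:4420:

  # First attach succeeds and creates the NVMe0n1 bdev.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Re-attaching the same name with a different hostnqn (-q), a different subsystem,
  # or -x disable/failover is rejected with -114, exactly as logged above.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -q nqn.2021-09-7.io.spdk:00001 || true   # expected failure

  # Only a genuinely new path to the same controller is accepted (port 4421 above).
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1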
00:28:58.246 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:28:58.246 16:27:57 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.246 16:27:57 -- nvmf/common.sh@7 -- # uname -s 00:28:58.246 16:27:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.246 16:27:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.246 16:27:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.246 16:27:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.246 16:27:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.246 16:27:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.246 16:27:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.246 16:27:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.246 16:27:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.246 16:27:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.507 16:27:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:58.507 16:27:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:58.507 16:27:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.507 16:27:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.507 16:27:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:58.507 16:27:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:58.507 16:27:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.507 16:27:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.507 16:27:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.507 16:27:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.507 16:27:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.507 16:27:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.507 16:27:57 -- paths/export.sh@5 -- # export PATH 00:28:58.507 16:27:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.507 16:27:57 -- nvmf/common.sh@46 -- # : 0 00:28:58.507 16:27:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:58.507 16:27:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:58.507 16:27:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:58.507 16:27:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.507 16:27:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.507 16:27:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:58.507 16:27:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:58.507 16:27:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:58.507 16:27:57 -- host/aer.sh@11 -- # nvmftestinit 00:28:58.507 16:27:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:58.507 16:27:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.507 16:27:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:58.507 16:27:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:58.507 16:27:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:58.507 16:27:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.507 16:27:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.507 16:27:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.507 16:27:57 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:28:58.507 16:27:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:58.507 16:27:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:58.507 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:29:03.793 16:28:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:03.794 16:28:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:03.794 16:28:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:03.794 16:28:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:03.794 16:28:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:03.794 16:28:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:03.794 16:28:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:03.794 16:28:02 -- nvmf/common.sh@294 -- # net_devs=() 00:29:03.794 16:28:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:03.794 16:28:02 -- nvmf/common.sh@295 -- # e810=() 00:29:03.794 16:28:02 -- nvmf/common.sh@295 -- # local -ga e810 00:29:03.794 16:28:02 -- nvmf/common.sh@296 -- # x722=() 
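The device scan nvmf/common.sh is walking through here keys off fixed PCI IDs for Intel E810/X722 and Mellanox parts. As a rough manual equivalent (illustration only; the suite reads its own pci_bus_cache rather than calling lspci):

  # Vendor:device pairs taken from the e810/x722/mlx lists above.
  for id in 8086:1592 8086:159b 8086:37d2 15b3:101d 15b3:1017 15b3:1015 15b3:1013; do
      echo "== $id =="
      lspci -d "$id"
  done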
00:29:03.794 16:28:02 -- nvmf/common.sh@296 -- # local -ga x722 00:29:03.794 16:28:02 -- nvmf/common.sh@297 -- # mlx=() 00:29:03.794 16:28:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:03.794 16:28:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.794 16:28:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:03.794 16:28:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:03.794 16:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:03.794 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:03.794 16:28:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:03.794 16:28:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:03.794 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:03.794 16:28:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:03.794 16:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.794 16:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.794 16:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:03.794 Found net devices under 0000:27:00.0: cvl_0_0 00:29:03.794 16:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.794 16:28:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:03.794 
16:28:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.794 16:28:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.794 16:28:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:03.794 Found net devices under 0000:27:00.1: cvl_0_1 00:29:03.794 16:28:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.794 16:28:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:03.794 16:28:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:03.794 16:28:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.794 16:28:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.794 16:28:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.794 16:28:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:03.794 16:28:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.794 16:28:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.794 16:28:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:03.794 16:28:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.794 16:28:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.794 16:28:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:03.794 16:28:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:03.794 16:28:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.794 16:28:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.794 16:28:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.794 16:28:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.794 16:28:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:03.794 16:28:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.794 16:28:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.794 16:28:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.794 16:28:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:03.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:29:03.794 00:29:03.794 --- 10.0.0.2 ping statistics --- 00:29:03.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.794 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:29:03.794 16:28:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:29:03.794 00:29:03.794 --- 10.0.0.1 ping statistics --- 00:29:03.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.794 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:29:03.794 16:28:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.794 16:28:02 -- nvmf/common.sh@410 -- # return 0 00:29:03.794 16:28:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:03.794 16:28:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.794 16:28:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:03.794 16:28:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.794 16:28:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:03.794 16:28:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:03.794 16:28:02 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:03.794 16:28:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:03.794 16:28:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:03.794 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:29:03.794 16:28:02 -- nvmf/common.sh@469 -- # nvmfpid=3243603 00:29:03.794 16:28:02 -- nvmf/common.sh@470 -- # waitforlisten 3243603 00:29:03.794 16:28:02 -- common/autotest_common.sh@819 -- # '[' -z 3243603 ']' 00:29:03.794 16:28:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.794 16:28:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:03.794 16:28:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:03.794 16:28:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.794 16:28:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:03.794 16:28:02 -- common/autotest_common.sh@10 -- # set +x 00:29:03.794 [2024-04-23 16:28:02.540254] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:29:03.794 [2024-04-23 16:28:02.540364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.794 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.794 [2024-04-23 16:28:02.662983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.054 [2024-04-23 16:28:02.756706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:04.054 [2024-04-23 16:28:02.756880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.054 [2024-04-23 16:28:02.756895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.054 [2024-04-23 16:28:02.756906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
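The nvmf_tcp_init block above is what creates the single-host target/initiator split for this suite: the cvl_0_0 port is pushed into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the peer port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens TCP/4420. Condensed from the commands logged above (same interface names assumed):

  ip netns add cvl_0_0_ns_spdk                     # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # sanity check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The aer suite's nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the startup whose DPDK and reactor notices appear around this point in the log.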
00:29:04.054 [2024-04-23 16:28:02.756976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.054 [2024-04-23 16:28:02.757000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.054 [2024-04-23 16:28:02.757034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.054 [2024-04-23 16:28:02.757022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.313 16:28:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.313 16:28:03 -- common/autotest_common.sh@852 -- # return 0 00:29:04.313 16:28:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:04.313 16:28:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:04.313 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 16:28:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.573 16:28:03 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 [2024-04-23 16:28:03.265957] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 Malloc0 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 [2024-04-23 16:28:03.334161] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:04.573 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.573 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.573 [2024-04-23 16:28:03.341866] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:04.573 [ 00:29:04.573 { 00:29:04.573 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:04.573 "subtype": "Discovery", 00:29:04.573 "listen_addresses": [], 00:29:04.573 "allow_any_host": true, 00:29:04.573 "hosts": [] 00:29:04.573 }, 00:29:04.573 { 00:29:04.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:29:04.573 "subtype": "NVMe", 00:29:04.573 "listen_addresses": [ 00:29:04.573 { 00:29:04.573 "transport": "TCP", 00:29:04.573 "trtype": "TCP", 00:29:04.573 "adrfam": "IPv4", 00:29:04.573 "traddr": "10.0.0.2", 00:29:04.573 "trsvcid": "4420" 00:29:04.573 } 00:29:04.573 ], 00:29:04.573 "allow_any_host": true, 00:29:04.573 "hosts": [], 00:29:04.573 "serial_number": "SPDK00000000000001", 00:29:04.573 "model_number": "SPDK bdev Controller", 00:29:04.573 "max_namespaces": 2, 00:29:04.573 "min_cntlid": 1, 00:29:04.573 "max_cntlid": 65519, 00:29:04.573 "namespaces": [ 00:29:04.573 { 00:29:04.573 "nsid": 1, 00:29:04.573 "bdev_name": "Malloc0", 00:29:04.573 "name": "Malloc0", 00:29:04.573 "nguid": "BFAB4CFBFAA049A1BCC7BB0444845F4E", 00:29:04.573 "uuid": "bfab4cfb-faa0-49a1-bcc7-bb0444845f4e" 00:29:04.573 } 00:29:04.573 ] 00:29:04.573 } 00:29:04.573 ] 00:29:04.573 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.573 16:28:03 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:04.573 16:28:03 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:04.573 16:28:03 -- host/aer.sh@33 -- # aerpid=3243903 00:29:04.573 16:28:03 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:04.573 16:28:03 -- common/autotest_common.sh@1244 -- # local i=0 00:29:04.573 16:28:03 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:04.573 16:28:03 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:29:04.573 16:28:03 -- common/autotest_common.sh@1247 -- # i=1 00:29:04.573 16:28:03 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:04.573 16:28:03 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:04.573 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.573 16:28:03 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:04.573 16:28:03 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:29:04.573 16:28:03 -- common/autotest_common.sh@1247 -- # i=2 00:29:04.573 16:28:03 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:29:04.835 16:28:03 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:04.835 16:28:03 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:04.835 16:28:03 -- common/autotest_common.sh@1255 -- # return 0 00:29:04.835 16:28:03 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:04.835 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.835 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.835 Malloc1 00:29:04.835 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.835 16:28:03 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:04.835 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.835 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.835 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.835 16:28:03 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:04.835 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.835 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.835 [ 00:29:04.835 { 00:29:04.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:04.835 "subtype": "Discovery", 00:29:04.835 "listen_addresses": [], 00:29:04.835 "allow_any_host": true, 00:29:04.835 "hosts": [] 00:29:04.835 }, 00:29:04.835 { 00:29:04.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.835 "subtype": "NVMe", 00:29:04.835 "listen_addresses": [ 00:29:04.835 { 00:29:04.835 "transport": "TCP", 00:29:04.835 "trtype": "TCP", 00:29:04.835 "adrfam": "IPv4", 00:29:04.835 "traddr": "10.0.0.2", 00:29:04.835 "trsvcid": "4420" 00:29:04.835 } 00:29:04.835 ], 00:29:04.835 "allow_any_host": true, 00:29:04.835 "hosts": [], 00:29:04.835 "serial_number": "SPDK00000000000001", 00:29:04.835 "model_number": "SPDK bdev Controller", 00:29:04.835 "max_namespaces": 2, 00:29:04.835 "min_cntlid": 1, 00:29:04.835 "max_cntlid": 65519, 00:29:04.835 "namespaces": [ 00:29:04.835 { 00:29:04.835 "nsid": 1, 00:29:04.835 "bdev_name": "Malloc0", 00:29:04.835 "name": "Malloc0", 00:29:04.835 "nguid": "BFAB4CFBFAA049A1BCC7BB0444845F4E", 00:29:04.835 "uuid": "bfab4cfb-faa0-49a1-bcc7-bb0444845f4e" 00:29:04.835 }, 00:29:04.835 { 00:29:04.835 "nsid": 2, 00:29:04.835 "bdev_name": "Malloc1", 00:29:04.835 "name": "Malloc1", 00:29:04.835 "nguid": "44FFC5E6353C4D4F828455398DF38A83", 00:29:04.835 "uuid": "44ffc5e6-353c-4d4f-8284-55398df38a83" 00:29:04.835 } 00:29:04.835 ] 00:29:04.835 } 00:29:04.835 ] 00:29:04.835 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.835 16:28:03 -- host/aer.sh@43 -- # wait 3243903 00:29:04.835 Asynchronous Event Request test 00:29:04.835 Attaching to 10.0.0.2 00:29:04.835 Attached to 10.0.0.2 00:29:04.835 Registering asynchronous event callbacks... 00:29:04.835 Starting namespace attribute notice tests for all controllers... 00:29:04.835 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:04.835 aer_cb - Changed Namespace 00:29:04.835 Cleaning up... 
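The "aer_cb - Changed Namespace" line above is the assertion at the heart of aer.sh: the aer helper registers for asynchronous events on cnode1, and hot-adding a second namespace over RPC is what completes the request. Condensed from the RPCs logged above (rpc.py standing in for the suite's rpc_cmd; SPDK tree paths assumed):

  rpc() { ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 --name Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Start the AER listener with the same arguments as above; the suite then
  # waits for /tmp/aer_touch_file before continuing (host/aer.sh@36).
  ./test/nvme/aer/aer -n 2 -t /tmp/aer_touch_file \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

  # Adding namespace 2 is what fires the namespace-attribute-changed event seen above.
  rpc bdev_malloc_create 64 4096 --name Malloc1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait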
00:29:04.835 16:28:03 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:04.835 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.835 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:04.835 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.835 16:28:03 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:04.835 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.835 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:05.095 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.095 16:28:03 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.095 16:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.095 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:05.095 16:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.095 16:28:03 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:05.095 16:28:03 -- host/aer.sh@51 -- # nvmftestfini 00:29:05.095 16:28:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:05.095 16:28:03 -- nvmf/common.sh@116 -- # sync 00:29:05.095 16:28:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:05.095 16:28:03 -- nvmf/common.sh@119 -- # set +e 00:29:05.095 16:28:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:05.095 16:28:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:05.095 rmmod nvme_tcp 00:29:05.095 rmmod nvme_fabrics 00:29:05.095 rmmod nvme_keyring 00:29:05.095 16:28:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:05.095 16:28:03 -- nvmf/common.sh@123 -- # set -e 00:29:05.095 16:28:03 -- nvmf/common.sh@124 -- # return 0 00:29:05.095 16:28:03 -- nvmf/common.sh@477 -- # '[' -n 3243603 ']' 00:29:05.095 16:28:03 -- nvmf/common.sh@478 -- # killprocess 3243603 00:29:05.095 16:28:03 -- common/autotest_common.sh@926 -- # '[' -z 3243603 ']' 00:29:05.095 16:28:03 -- common/autotest_common.sh@930 -- # kill -0 3243603 00:29:05.095 16:28:03 -- common/autotest_common.sh@931 -- # uname 00:29:05.095 16:28:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.095 16:28:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3243603 00:29:05.095 16:28:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:05.095 16:28:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:05.095 16:28:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3243603' 00:29:05.095 killing process with pid 3243603 00:29:05.095 16:28:03 -- common/autotest_common.sh@945 -- # kill 3243603 00:29:05.095 [2024-04-23 16:28:03.961670] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:05.095 16:28:03 -- common/autotest_common.sh@950 -- # wait 3243603 00:29:05.662 16:28:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:05.662 16:28:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:05.662 16:28:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:05.662 16:28:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.662 16:28:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:05.662 16:28:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.662 16:28:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.662 16:28:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.565 16:28:06 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:07.565 00:29:07.565 real 0m9.383s 00:29:07.565 user 0m7.616s 00:29:07.565 sys 0m4.378s 00:29:07.565 16:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.565 16:28:06 -- common/autotest_common.sh@10 -- # set +x 00:29:07.565 ************************************ 00:29:07.565 END TEST nvmf_aer 00:29:07.565 ************************************ 00:29:07.824 16:28:06 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:07.824 16:28:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:07.824 16:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.824 16:28:06 -- common/autotest_common.sh@10 -- # set +x 00:29:07.824 ************************************ 00:29:07.824 START TEST nvmf_async_init 00:29:07.824 ************************************ 00:29:07.824 16:28:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:07.824 * Looking for test storage... 00:29:07.824 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:07.824 16:28:06 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.824 16:28:06 -- nvmf/common.sh@7 -- # uname -s 00:29:07.824 16:28:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.824 16:28:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.824 16:28:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.824 16:28:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.824 16:28:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.824 16:28:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.824 16:28:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.824 16:28:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.824 16:28:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.824 16:28:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.824 16:28:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:07.824 16:28:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:07.824 16:28:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.824 16:28:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.825 16:28:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:07.825 16:28:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:07.825 16:28:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.825 16:28:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.825 16:28:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.825 16:28:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.825 
16:28:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.825 16:28:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.825 16:28:06 -- paths/export.sh@5 -- # export PATH 00:29:07.825 16:28:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.825 16:28:06 -- nvmf/common.sh@46 -- # : 0 00:29:07.825 16:28:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:07.825 16:28:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:07.825 16:28:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:07.825 16:28:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.825 16:28:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.825 16:28:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:07.825 16:28:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:07.825 16:28:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:07.825 16:28:06 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:07.825 16:28:06 -- host/async_init.sh@14 -- # null_block_size=512 00:29:07.825 16:28:06 -- host/async_init.sh@15 -- # null_bdev=null0 00:29:07.825 16:28:06 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:07.825 16:28:06 -- host/async_init.sh@20 -- # uuidgen 00:29:07.825 16:28:06 -- host/async_init.sh@20 -- # tr -d - 00:29:07.825 16:28:06 -- host/async_init.sh@20 -- # nguid=19c5ff3f96b24e12b308dea9118844b0 00:29:07.825 16:28:06 -- host/async_init.sh@22 -- # nvmftestinit 00:29:07.825 16:28:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:07.825 16:28:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.825 16:28:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:07.825 16:28:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:07.825 16:28:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:07.825 16:28:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.825 16:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.825 16:28:06 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:29:07.825 16:28:06 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:07.825 16:28:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:07.825 16:28:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:07.825 16:28:06 -- common/autotest_common.sh@10 -- # set +x 00:29:13.102 16:28:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:13.102 16:28:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:13.102 16:28:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:13.102 16:28:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:13.102 16:28:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:13.102 16:28:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:13.102 16:28:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:13.102 16:28:11 -- nvmf/common.sh@294 -- # net_devs=() 00:29:13.102 16:28:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:13.102 16:28:11 -- nvmf/common.sh@295 -- # e810=() 00:29:13.102 16:28:11 -- nvmf/common.sh@295 -- # local -ga e810 00:29:13.102 16:28:11 -- nvmf/common.sh@296 -- # x722=() 00:29:13.102 16:28:11 -- nvmf/common.sh@296 -- # local -ga x722 00:29:13.102 16:28:11 -- nvmf/common.sh@297 -- # mlx=() 00:29:13.102 16:28:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:13.102 16:28:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.102 16:28:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:13.102 16:28:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.102 16:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:13.102 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:13.102 16:28:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.102 16:28:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:13.102 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:13.102 16:28:11 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.102 16:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.102 16:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.102 16:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:13.102 Found net devices under 0000:27:00.0: cvl_0_0 00:29:13.102 16:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.102 16:28:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.102 16:28:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.102 16:28:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.102 16:28:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:13.102 Found net devices under 0000:27:00.1: cvl_0_1 00:29:13.102 16:28:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.102 16:28:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:13.102 16:28:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:13.102 16:28:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:13.102 16:28:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.102 16:28:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.102 16:28:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.102 16:28:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:13.102 16:28:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.102 16:28:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.102 16:28:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:13.102 16:28:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.102 16:28:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.102 16:28:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:13.102 16:28:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:13.103 16:28:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.103 16:28:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.103 16:28:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.103 16:28:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.103 16:28:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:13.103 16:28:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.364 16:28:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.364 16:28:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.364 16:28:12 -- 
nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:13.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:29:13.364 00:29:13.364 --- 10.0.0.2 ping statistics --- 00:29:13.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.364 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:29:13.364 16:28:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:29:13.364 00:29:13.364 --- 10.0.0.1 ping statistics --- 00:29:13.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.364 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:29:13.364 16:28:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.364 16:28:12 -- nvmf/common.sh@410 -- # return 0 00:29:13.364 16:28:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:13.364 16:28:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.364 16:28:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:13.364 16:28:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:13.364 16:28:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.364 16:28:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:13.364 16:28:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:13.364 16:28:12 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:13.364 16:28:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:13.364 16:28:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:13.364 16:28:12 -- common/autotest_common.sh@10 -- # set +x 00:29:13.364 16:28:12 -- nvmf/common.sh@469 -- # nvmfpid=3248086 00:29:13.364 16:28:12 -- nvmf/common.sh@470 -- # waitforlisten 3248086 00:29:13.364 16:28:12 -- common/autotest_common.sh@819 -- # '[' -z 3248086 ']' 00:29:13.364 16:28:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.364 16:28:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:13.364 16:28:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.364 16:28:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.364 16:28:12 -- common/autotest_common.sh@10 -- # set +x 00:29:13.364 16:28:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:13.625 [2024-04-23 16:28:12.301330] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:29:13.625 [2024-04-23 16:28:12.301468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.625 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.625 [2024-04-23 16:28:12.443265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.625 [2024-04-23 16:28:12.537356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.625 [2024-04-23 16:28:12.537545] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:13.625 [2024-04-23 16:28:12.537559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.625 [2024-04-23 16:28:12.537570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.625 [2024-04-23 16:28:12.537607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.215 16:28:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:14.215 16:28:13 -- common/autotest_common.sh@852 -- # return 0 00:29:14.215 16:28:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:14.215 16:28:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 16:28:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.215 16:28:13 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 [2024-04-23 16:28:13.059811] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 null0 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19c5ff3f96b24e12b308dea9118844b0 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.215 [2024-04-23 16:28:13.103969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.215 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.215 16:28:13 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:14.215 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.215 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.563 nvme0n1 00:29:14.563 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.563 16:28:13 -- host/async_init.sh@41 -- # 
rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.563 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.563 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.563 [ 00:29:14.563 { 00:29:14.563 "name": "nvme0n1", 00:29:14.563 "aliases": [ 00:29:14.563 "19c5ff3f-96b2-4e12-b308-dea9118844b0" 00:29:14.563 ], 00:29:14.563 "product_name": "NVMe disk", 00:29:14.563 "block_size": 512, 00:29:14.563 "num_blocks": 2097152, 00:29:14.563 "uuid": "19c5ff3f-96b2-4e12-b308-dea9118844b0", 00:29:14.563 "assigned_rate_limits": { 00:29:14.563 "rw_ios_per_sec": 0, 00:29:14.563 "rw_mbytes_per_sec": 0, 00:29:14.563 "r_mbytes_per_sec": 0, 00:29:14.563 "w_mbytes_per_sec": 0 00:29:14.563 }, 00:29:14.563 "claimed": false, 00:29:14.563 "zoned": false, 00:29:14.563 "supported_io_types": { 00:29:14.563 "read": true, 00:29:14.563 "write": true, 00:29:14.563 "unmap": false, 00:29:14.563 "write_zeroes": true, 00:29:14.563 "flush": true, 00:29:14.563 "reset": true, 00:29:14.563 "compare": true, 00:29:14.563 "compare_and_write": true, 00:29:14.563 "abort": true, 00:29:14.563 "nvme_admin": true, 00:29:14.563 "nvme_io": true 00:29:14.563 }, 00:29:14.563 "driver_specific": { 00:29:14.563 "nvme": [ 00:29:14.563 { 00:29:14.563 "trid": { 00:29:14.563 "trtype": "TCP", 00:29:14.563 "adrfam": "IPv4", 00:29:14.563 "traddr": "10.0.0.2", 00:29:14.563 "trsvcid": "4420", 00:29:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.563 }, 00:29:14.563 "ctrlr_data": { 00:29:14.563 "cntlid": 1, 00:29:14.563 "vendor_id": "0x8086", 00:29:14.563 "model_number": "SPDK bdev Controller", 00:29:14.563 "serial_number": "00000000000000000000", 00:29:14.563 "firmware_revision": "24.01.1", 00:29:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.563 "oacs": { 00:29:14.563 "security": 0, 00:29:14.563 "format": 0, 00:29:14.563 "firmware": 0, 00:29:14.563 "ns_manage": 0 00:29:14.563 }, 00:29:14.563 "multi_ctrlr": true, 00:29:14.563 "ana_reporting": false 00:29:14.563 }, 00:29:14.563 "vs": { 00:29:14.563 "nvme_version": "1.3" 00:29:14.563 }, 00:29:14.563 "ns_data": { 00:29:14.563 "id": 1, 00:29:14.563 "can_share": true 00:29:14.563 } 00:29:14.563 } 00:29:14.563 ], 00:29:14.563 "mp_policy": "active_passive" 00:29:14.563 } 00:29:14.563 } 00:29:14.563 ] 00:29:14.563 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.563 16:28:13 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:14.563 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.563 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.563 [2024-04-23 16:28:13.357461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:14.563 [2024-04-23 16:28:13.357552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003bc0 (9): Bad file descriptor 00:29:14.563 [2024-04-23 16:28:13.489746] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
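Stripped of the xtrace noise, the async_init flow up to this point is the following RPC sequence (rpc_cmd is the autotest RPC helper driving the nvmf_tgt started above; the namespace GUID, addresses and ports are the ones from this run):
# Target side: transport, backing bdev, subsystem, namespace, listener
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd bdev_null_create null0 1024 512               # 512-byte blocks; the JSON above reports 2097152 blocks
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 19c5ff3f96b24e12b308dea9118844b0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: attach over TCP, inspect the resulting bdev, then reset the controller
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_get_bdevs -b nvme0n1                     # first dump above shows cntlid 1
rpc_cmd bdev_nvme_reset_controller nvme0              # after the reconnect the next dump shows cntlid 2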
00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [ 00:29:14.839 { 00:29:14.839 "name": "nvme0n1", 00:29:14.839 "aliases": [ 00:29:14.839 "19c5ff3f-96b2-4e12-b308-dea9118844b0" 00:29:14.839 ], 00:29:14.839 "product_name": "NVMe disk", 00:29:14.839 "block_size": 512, 00:29:14.839 "num_blocks": 2097152, 00:29:14.839 "uuid": "19c5ff3f-96b2-4e12-b308-dea9118844b0", 00:29:14.839 "assigned_rate_limits": { 00:29:14.839 "rw_ios_per_sec": 0, 00:29:14.839 "rw_mbytes_per_sec": 0, 00:29:14.839 "r_mbytes_per_sec": 0, 00:29:14.839 "w_mbytes_per_sec": 0 00:29:14.839 }, 00:29:14.839 "claimed": false, 00:29:14.839 "zoned": false, 00:29:14.839 "supported_io_types": { 00:29:14.839 "read": true, 00:29:14.839 "write": true, 00:29:14.839 "unmap": false, 00:29:14.839 "write_zeroes": true, 00:29:14.839 "flush": true, 00:29:14.839 "reset": true, 00:29:14.839 "compare": true, 00:29:14.839 "compare_and_write": true, 00:29:14.839 "abort": true, 00:29:14.839 "nvme_admin": true, 00:29:14.839 "nvme_io": true 00:29:14.839 }, 00:29:14.839 "driver_specific": { 00:29:14.839 "nvme": [ 00:29:14.839 { 00:29:14.839 "trid": { 00:29:14.839 "trtype": "TCP", 00:29:14.839 "adrfam": "IPv4", 00:29:14.839 "traddr": "10.0.0.2", 00:29:14.839 "trsvcid": "4420", 00:29:14.839 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.839 }, 00:29:14.839 "ctrlr_data": { 00:29:14.839 "cntlid": 2, 00:29:14.839 "vendor_id": "0x8086", 00:29:14.839 "model_number": "SPDK bdev Controller", 00:29:14.839 "serial_number": "00000000000000000000", 00:29:14.839 "firmware_revision": "24.01.1", 00:29:14.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.839 "oacs": { 00:29:14.839 "security": 0, 00:29:14.839 "format": 0, 00:29:14.839 "firmware": 0, 00:29:14.839 "ns_manage": 0 00:29:14.839 }, 00:29:14.839 "multi_ctrlr": true, 00:29:14.839 "ana_reporting": false 00:29:14.839 }, 00:29:14.839 "vs": { 00:29:14.839 "nvme_version": "1.3" 00:29:14.839 }, 00:29:14.839 "ns_data": { 00:29:14.839 "id": 1, 00:29:14.839 "can_share": true 00:29:14.839 } 00:29:14.839 } 00:29:14.839 ], 00:29:14.839 "mp_policy": "active_passive" 00:29:14.839 } 00:29:14.839 } 00:29:14.839 ] 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@53 -- # mktemp 00:29:14.839 16:28:13 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gAzU4uwyXJ 00:29:14.839 16:28:13 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:14.839 16:28:13 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gAzU4uwyXJ 00:29:14.839 16:28:13 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [2024-04-23 16:28:13.545614] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:14.839 [2024-04-23 16:28:13.545764] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gAzU4uwyXJ 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gAzU4uwyXJ 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [2024-04-23 16:28:13.561602] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:14.839 nvme0n1 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 [ 00:29:14.839 { 00:29:14.839 "name": "nvme0n1", 00:29:14.839 "aliases": [ 00:29:14.839 "19c5ff3f-96b2-4e12-b308-dea9118844b0" 00:29:14.839 ], 00:29:14.839 "product_name": "NVMe disk", 00:29:14.839 "block_size": 512, 00:29:14.839 "num_blocks": 2097152, 00:29:14.839 "uuid": "19c5ff3f-96b2-4e12-b308-dea9118844b0", 00:29:14.839 "assigned_rate_limits": { 00:29:14.839 "rw_ios_per_sec": 0, 00:29:14.839 "rw_mbytes_per_sec": 0, 00:29:14.839 "r_mbytes_per_sec": 0, 00:29:14.839 "w_mbytes_per_sec": 0 00:29:14.839 }, 00:29:14.839 "claimed": false, 00:29:14.839 "zoned": false, 00:29:14.839 "supported_io_types": { 00:29:14.839 "read": true, 00:29:14.839 "write": true, 00:29:14.839 "unmap": false, 00:29:14.839 "write_zeroes": true, 00:29:14.839 "flush": true, 00:29:14.839 "reset": true, 00:29:14.839 "compare": true, 00:29:14.839 "compare_and_write": true, 00:29:14.839 "abort": true, 00:29:14.839 "nvme_admin": true, 00:29:14.839 "nvme_io": true 00:29:14.839 }, 00:29:14.839 "driver_specific": { 00:29:14.839 "nvme": [ 00:29:14.839 { 00:29:14.839 "trid": { 00:29:14.839 "trtype": "TCP", 00:29:14.839 "adrfam": "IPv4", 00:29:14.839 "traddr": "10.0.0.2", 00:29:14.839 "trsvcid": "4421", 00:29:14.839 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:14.839 }, 00:29:14.839 "ctrlr_data": { 00:29:14.839 "cntlid": 3, 00:29:14.839 "vendor_id": "0x8086", 00:29:14.839 "model_number": "SPDK bdev Controller", 00:29:14.839 "serial_number": "00000000000000000000", 00:29:14.839 "firmware_revision": "24.01.1", 00:29:14.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.839 "oacs": { 00:29:14.839 "security": 0, 00:29:14.839 "format": 0, 00:29:14.839 "firmware": 0, 00:29:14.839 "ns_manage": 0 00:29:14.839 }, 00:29:14.839 "multi_ctrlr": true, 00:29:14.839 "ana_reporting": false 00:29:14.839 }, 00:29:14.839 "vs": 
{ 00:29:14.839 "nvme_version": "1.3" 00:29:14.839 }, 00:29:14.839 "ns_data": { 00:29:14.839 "id": 1, 00:29:14.839 "can_share": true 00:29:14.839 } 00:29:14.839 } 00:29:14.839 ], 00:29:14.839 "mp_policy": "active_passive" 00:29:14.839 } 00:29:14.839 } 00:29:14.839 ] 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.839 16:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.839 16:28:13 -- common/autotest_common.sh@10 -- # set +x 00:29:14.839 16:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.839 16:28:13 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.gAzU4uwyXJ 00:29:14.839 16:28:13 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:14.839 16:28:13 -- host/async_init.sh@78 -- # nvmftestfini 00:29:14.839 16:28:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:14.839 16:28:13 -- nvmf/common.sh@116 -- # sync 00:29:14.839 16:28:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:14.839 16:28:13 -- nvmf/common.sh@119 -- # set +e 00:29:14.839 16:28:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:14.839 16:28:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:14.839 rmmod nvme_tcp 00:29:14.839 rmmod nvme_fabrics 00:29:14.839 rmmod nvme_keyring 00:29:14.839 16:28:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:14.839 16:28:13 -- nvmf/common.sh@123 -- # set -e 00:29:14.839 16:28:13 -- nvmf/common.sh@124 -- # return 0 00:29:14.839 16:28:13 -- nvmf/common.sh@477 -- # '[' -n 3248086 ']' 00:29:14.839 16:28:13 -- nvmf/common.sh@478 -- # killprocess 3248086 00:29:14.839 16:28:13 -- common/autotest_common.sh@926 -- # '[' -z 3248086 ']' 00:29:14.839 16:28:13 -- common/autotest_common.sh@930 -- # kill -0 3248086 00:29:14.839 16:28:13 -- common/autotest_common.sh@931 -- # uname 00:29:14.839 16:28:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.839 16:28:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3248086 00:29:15.100 16:28:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:15.100 16:28:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:15.100 16:28:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3248086' 00:29:15.100 killing process with pid 3248086 00:29:15.100 16:28:13 -- common/autotest_common.sh@945 -- # kill 3248086 00:29:15.100 16:28:13 -- common/autotest_common.sh@950 -- # wait 3248086 00:29:15.361 16:28:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:15.361 16:28:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:15.361 16:28:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:15.361 16:28:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:15.361 16:28:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:15.361 16:28:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.361 16:28:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.361 16:28:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.903 16:28:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:17.903 00:29:17.903 real 0m9.788s 00:29:17.903 user 0m3.552s 00:29:17.903 sys 0m4.584s 00:29:17.903 16:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.903 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:29:17.903 ************************************ 00:29:17.903 END TEST nvmf_async_init 00:29:17.903 
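The tail of the test exercises the experimental TLS path; condensed, the traced steps were roughly the following (the temp key path and interchange key are the ones from this run, and the redirection into the key file is implied rather than shown by the xtrace output):
# PSK-secured listener on port 4421, exercised at the end of nvmf_async_init above
key_path=/tmp/tmp.gAzU4uwyXJ                          # produced by mktemp in the log
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc_cmd bdev_nvme_detach_controller nvme0
rm -f "$key_path"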
************************************ 00:29:17.903 16:28:16 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:17.903 16:28:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:17.903 16:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.903 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:29:17.903 ************************************ 00:29:17.903 START TEST dma 00:29:17.903 ************************************ 00:29:17.903 16:28:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:17.903 * Looking for test storage... 00:29:17.903 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:17.903 16:28:16 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.903 16:28:16 -- nvmf/common.sh@7 -- # uname -s 00:29:17.903 16:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.903 16:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.903 16:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.903 16:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.903 16:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.903 16:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.903 16:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.903 16:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.903 16:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.903 16:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.903 16:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:17.903 16:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:17.903 16:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.903 16:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.903 16:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:17.903 16:28:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:17.903 16:28:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.903 16:28:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.903 16:28:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.903 16:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.903 16:28:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.903 16:28:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.903 16:28:16 -- paths/export.sh@5 -- # export PATH 00:29:17.903 16:28:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.903 16:28:16 -- nvmf/common.sh@46 -- # : 0 00:29:17.903 16:28:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:17.903 16:28:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:17.903 16:28:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:17.903 16:28:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.903 16:28:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.903 16:28:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:17.903 16:28:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:17.903 16:28:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:17.903 16:28:16 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:17.903 16:28:16 -- host/dma.sh@13 -- # exit 0 00:29:17.903 00:29:17.903 real 0m0.087s 00:29:17.903 user 0m0.041s 00:29:17.903 sys 0m0.051s 00:29:17.903 16:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.903 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:29:17.903 ************************************ 00:29:17.903 END TEST dma 00:29:17.903 ************************************ 00:29:17.903 16:28:16 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:17.903 16:28:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:17.903 16:28:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.903 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:29:17.903 ************************************ 00:29:17.903 START TEST nvmf_identify 00:29:17.903 ************************************ 00:29:17.903 16:28:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:17.903 * Looking for test 
storage... 00:29:17.903 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:17.903 16:28:16 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.903 16:28:16 -- nvmf/common.sh@7 -- # uname -s 00:29:17.903 16:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.903 16:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.903 16:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.903 16:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.903 16:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.903 16:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.903 16:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.903 16:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.903 16:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.903 16:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.903 16:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:17.903 16:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:17.903 16:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.903 16:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.903 16:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:17.903 16:28:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:17.903 16:28:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.903 16:28:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.903 16:28:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.903 16:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.903 16:28:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.904 16:28:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.904 16:28:16 -- paths/export.sh@5 -- # export PATH 00:29:17.904 16:28:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.904 16:28:16 -- nvmf/common.sh@46 -- # : 0 00:29:17.904 16:28:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:17.904 16:28:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:17.904 16:28:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:17.904 16:28:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.904 16:28:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.904 16:28:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:17.904 16:28:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:17.904 16:28:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:17.904 16:28:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.904 16:28:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.904 16:28:16 -- host/identify.sh@14 -- # nvmftestinit 00:29:17.904 16:28:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:17.904 16:28:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.904 16:28:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:17.904 16:28:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:17.904 16:28:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:17.904 16:28:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.904 16:28:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.904 16:28:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.904 16:28:16 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:17.904 16:28:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:17.904 16:28:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:17.904 16:28:16 -- common/autotest_common.sh@10 -- # set +x 00:29:23.179 16:28:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:23.179 16:28:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:23.179 16:28:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:23.179 16:28:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:23.179 16:28:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:23.179 16:28:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:23.179 16:28:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:23.179 16:28:21 -- nvmf/common.sh@294 -- # net_devs=() 00:29:23.179 16:28:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:23.179 16:28:21 -- 
nvmf/common.sh@295 -- # e810=() 00:29:23.179 16:28:21 -- nvmf/common.sh@295 -- # local -ga e810 00:29:23.179 16:28:21 -- nvmf/common.sh@296 -- # x722=() 00:29:23.179 16:28:21 -- nvmf/common.sh@296 -- # local -ga x722 00:29:23.179 16:28:21 -- nvmf/common.sh@297 -- # mlx=() 00:29:23.179 16:28:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:23.179 16:28:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.179 16:28:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:23.179 16:28:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:23.179 16:28:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:23.179 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:23.179 16:28:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:23.179 16:28:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:23.179 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:23.179 16:28:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:23.179 16:28:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.179 16:28:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.179 16:28:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:23.179 Found net devices under 0000:27:00.0: cvl_0_0 00:29:23.179 
16:28:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.179 16:28:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:23.179 16:28:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.179 16:28:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.179 16:28:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:23.179 Found net devices under 0000:27:00.1: cvl_0_1 00:29:23.179 16:28:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.179 16:28:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:23.179 16:28:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:23.179 16:28:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:23.179 16:28:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.179 16:28:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.179 16:28:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.179 16:28:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:23.179 16:28:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.179 16:28:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.179 16:28:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:23.179 16:28:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.180 16:28:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.180 16:28:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:23.180 16:28:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:23.180 16:28:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.180 16:28:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.180 16:28:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.180 16:28:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.180 16:28:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:23.180 16:28:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.180 16:28:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.180 16:28:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.180 16:28:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:23.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:29:23.180 00:29:23.180 --- 10.0.0.2 ping statistics --- 00:29:23.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.180 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:29:23.180 16:28:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:29:23.180 00:29:23.180 --- 10.0.0.1 ping statistics --- 00:29:23.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.180 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:29:23.180 16:28:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.180 16:28:22 -- nvmf/common.sh@410 -- # return 0 00:29:23.180 16:28:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:23.180 16:28:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.180 16:28:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:23.180 16:28:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:23.180 16:28:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.180 16:28:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:23.180 16:28:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:23.180 16:28:22 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:23.180 16:28:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:23.180 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 16:28:22 -- host/identify.sh@19 -- # nvmfpid=3252350 00:29:23.180 16:28:22 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.180 16:28:22 -- host/identify.sh@23 -- # waitforlisten 3252350 00:29:23.180 16:28:22 -- common/autotest_common.sh@819 -- # '[' -z 3252350 ']' 00:29:23.180 16:28:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.180 16:28:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.180 16:28:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.180 16:28:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.180 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 16:28:22 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:23.441 [2024-04-23 16:28:22.162131] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:29:23.441 [2024-04-23 16:28:22.162238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.441 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.441 [2024-04-23 16:28:22.283127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.704 [2024-04-23 16:28:22.378827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:23.704 [2024-04-23 16:28:22.379000] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.704 [2024-04-23 16:28:22.379015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.704 [2024-04-23 16:28:22.379025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:23.704 [2024-04-23 16:28:22.379086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.704 [2024-04-23 16:28:22.379200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.704 [2024-04-23 16:28:22.379292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.704 [2024-04-23 16:28:22.379304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.964 16:28:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:23.964 16:28:22 -- common/autotest_common.sh@852 -- # return 0 00:29:23.964 16:28:22 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:23.964 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.964 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:23.964 [2024-04-23 16:28:22.881188] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.964 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.964 16:28:22 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:23.964 16:28:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:23.964 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 16:28:22 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.224 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 Malloc0 00:29:24.224 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:22 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.224 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:22 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:24.224 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:22 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.224 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 [2024-04-23 16:28:22.989668] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.224 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:22 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.224 16:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:22 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 16:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:23 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:24.224 16:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.224 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:29:24.224 [2024-04-23 16:28:23.005425] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:24.224 [ 
00:29:24.224 { 00:29:24.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.224 "subtype": "Discovery", 00:29:24.224 "listen_addresses": [ 00:29:24.224 { 00:29:24.224 "transport": "TCP", 00:29:24.224 "trtype": "TCP", 00:29:24.224 "adrfam": "IPv4", 00:29:24.224 "traddr": "10.0.0.2", 00:29:24.224 "trsvcid": "4420" 00:29:24.224 } 00:29:24.224 ], 00:29:24.224 "allow_any_host": true, 00:29:24.224 "hosts": [] 00:29:24.224 }, 00:29:24.224 { 00:29:24.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.224 "subtype": "NVMe", 00:29:24.224 "listen_addresses": [ 00:29:24.224 { 00:29:24.224 "transport": "TCP", 00:29:24.224 "trtype": "TCP", 00:29:24.224 "adrfam": "IPv4", 00:29:24.224 "traddr": "10.0.0.2", 00:29:24.224 "trsvcid": "4420" 00:29:24.224 } 00:29:24.224 ], 00:29:24.224 "allow_any_host": true, 00:29:24.224 "hosts": [], 00:29:24.224 "serial_number": "SPDK00000000000001", 00:29:24.224 "model_number": "SPDK bdev Controller", 00:29:24.224 "max_namespaces": 32, 00:29:24.224 "min_cntlid": 1, 00:29:24.224 "max_cntlid": 65519, 00:29:24.224 "namespaces": [ 00:29:24.224 { 00:29:24.224 "nsid": 1, 00:29:24.224 "bdev_name": "Malloc0", 00:29:24.224 "name": "Malloc0", 00:29:24.224 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:24.224 "eui64": "ABCDEF0123456789", 00:29:24.224 "uuid": "181ac6fc-22f1-4467-8fef-6cd27e646289" 00:29:24.224 } 00:29:24.224 ] 00:29:24.224 } 00:29:24.224 ] 00:29:24.224 16:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.224 16:28:23 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:24.224 [2024-04-23 16:28:23.057584] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:29:24.224 [2024-04-23 16:28:23.057687] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252662 ] 00:29:24.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.224 [2024-04-23 16:28:23.112737] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:24.224 [2024-04-23 16:28:23.112820] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.224 [2024-04-23 16:28:23.112830] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.224 [2024-04-23 16:28:23.112849] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.224 [2024-04-23 16:28:23.112866] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.224 [2024-04-23 16:28:23.113573] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:24.224 [2024-04-23 16:28:23.113614] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:29:24.224 [2024-04-23 16:28:23.119639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.224 [2024-04-23 16:28:23.119657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.224 [2024-04-23 16:28:23.119664] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.224 [2024-04-23 16:28:23.119670] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.224 [2024-04-23 16:28:23.119716] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.119724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.119732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.224 [2024-04-23 16:28:23.119754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.224 [2024-04-23 16:28:23.119777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.224 [2024-04-23 16:28:23.127644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.224 [2024-04-23 16:28:23.127661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.224 [2024-04-23 16:28:23.127667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127674] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.224 [2024-04-23 16:28:23.127688] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.224 [2024-04-23 16:28:23.127700] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:24.224 [2024-04-23 16:28:23.127707] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:24.224 [2024-04-23 16:28:23.127724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127733] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.224 [2024-04-23 16:28:23.127757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.224 [2024-04-23 16:28:23.127776] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.224 [2024-04-23 16:28:23.127914] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.224 [2024-04-23 16:28:23.127922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.224 [2024-04-23 16:28:23.127933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.224 [2024-04-23 16:28:23.127952] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:24.224 [2024-04-23 16:28:23.127964] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:24.224 [2024-04-23 16:28:23.127974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.127985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.224 [2024-04-23 16:28:23.127997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.224 [2024-04-23 16:28:23.128009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.224 [2024-04-23 16:28:23.128225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.224 [2024-04-23 16:28:23.128233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.224 [2024-04-23 16:28:23.128237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.224 [2024-04-23 16:28:23.128242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.224 [2024-04-23 16:28:23.128248] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:24.225 [2024-04-23 16:28:23.128258] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.225 [2024-04-23 16:28:23.128349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.225 [2024-04-23 16:28:23.128360] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.225 [2024-04-23 16:28:23.128461] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.225 [2024-04-23 16:28:23.128468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.225 [2024-04-23 16:28:23.128472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.225 [2024-04-23 16:28:23.128486] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.225 [2024-04-23 16:28:23.128517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.225 [2024-04-23 16:28:23.128529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.225 [2024-04-23 16:28:23.128623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.225 [2024-04-23 16:28:23.128635] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.225 [2024-04-23 16:28:23.128639] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128644] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.225 [2024-04-23 16:28:23.128650] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:24.225 [2024-04-23 16:28:23.128658] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128668] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128774] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:24.225 [2024-04-23 16:28:23.128782] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.225 [2024-04-23 16:28:23.128815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.225 [2024-04-23 16:28:23.128826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.225 [2024-04-23 16:28:23.128942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.225 [2024-04-23 16:28:23.128949] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.225 [2024-04-23 16:28:23.128953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128958] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.225 [2024-04-23 16:28:23.128964] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.225 [2024-04-23 16:28:23.128976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.128987] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.225 [2024-04-23 16:28:23.128996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.225 [2024-04-23 16:28:23.129011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.225 [2024-04-23 16:28:23.129222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.225 [2024-04-23 16:28:23.129228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.225 [2024-04-23 16:28:23.129232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.129237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.225 [2024-04-23 16:28:23.129243] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.225 [2024-04-23 16:28:23.129249] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:24.225 [2024-04-23 16:28:23.129261] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:24.225 [2024-04-23 16:28:23.129273] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.225 [2024-04-23 16:28:23.129286] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.129292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.129298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.225 [2024-04-23 16:28:23.129308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.225 [2024-04-23 16:28:23.129318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.225 [2024-04-23 16:28:23.129484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.225 [2024-04-23 16:28:23.129491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.225 [2024-04-23 16:28:23.129495] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.129501] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:24.225 [2024-04-23 16:28:23.129508] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.225 [2024-04-23 16:28:23.129520] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.225 [2024-04-23 16:28:23.129527] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.169833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.486 [2024-04-23 16:28:23.169850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.486 [2024-04-23 16:28:23.169854] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.169861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.486 [2024-04-23 16:28:23.169877] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:24.486 [2024-04-23 16:28:23.169885] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:24.486 [2024-04-23 16:28:23.169894] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:24.486 [2024-04-23 16:28:23.169901] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:24.486 [2024-04-23 16:28:23.169909] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:24.486 [2024-04-23 16:28:23.169916] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:24.486 [2024-04-23 16:28:23.169926] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.486 [2024-04-23 16:28:23.169938] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.169943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.169949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.486 [2024-04-23 16:28:23.169962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.486 [2024-04-23 16:28:23.169978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.486 [2024-04-23 16:28:23.170235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.486 [2024-04-23 16:28:23.170242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.486 [2024-04-23 16:28:23.170249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.486 [2024-04-23 16:28:23.170263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.486 [2024-04-23 16:28:23.170284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.486 [2024-04-23 16:28:23.170291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170295] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:24.486 [2024-04-23 16:28:23.170307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.486 [2024-04-23 16:28:23.170313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:24.486 [2024-04-23 16:28:23.170329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.486 [2024-04-23 16:28:23.170336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170339] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.486 [2024-04-23 16:28:23.170344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.170351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.487 [2024-04-23 16:28:23.170357] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.487 [2024-04-23 16:28:23.170367] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.487 [2024-04-23 16:28:23.170375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.170396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.487 [2024-04-23 16:28:23.170409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.487 [2024-04-23 16:28:23.170414] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:24.487 [2024-04-23 16:28:23.170419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:24.487 [2024-04-23 16:28:23.170424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.487 [2024-04-23 16:28:23.170429] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.487 [2024-04-23 16:28:23.170695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:29:24.487 [2024-04-23 16:28:23.170702] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.170706] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170711] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.487 [2024-04-23 16:28:23.170718] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:24.487 [2024-04-23 16:28:23.170727] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:24.487 [2024-04-23 16:28:23.170741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.170765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.487 [2024-04-23 16:28:23.170776] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.487 [2024-04-23 16:28:23.170915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.487 [2024-04-23 16:28:23.170925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.487 [2024-04-23 16:28:23.170930] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170936] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:24.487 [2024-04-23 16:28:23.170942] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.487 [2024-04-23 16:28:23.170953] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.170958] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.487 [2024-04-23 16:28:23.171124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.171128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.487 [2024-04-23 16:28:23.171151] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:24.487 [2024-04-23 16:28:23.171187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171200] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.171209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.487 [2024-04-23 16:28:23.171217] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.171237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.487 [2024-04-23 16:28:23.171249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.487 [2024-04-23 16:28:23.171255] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.487 [2024-04-23 16:28:23.171442] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.487 [2024-04-23 16:28:23.171449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.487 [2024-04-23 16:28:23.171454] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171462] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=1024, cccid=4 00:29:24.487 [2024-04-23 16:28:23.171468] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=1024 00:29:24.487 [2024-04-23 16:28:23.171478] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171483] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.487 [2024-04-23 16:28:23.171498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.171502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.171507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:24.487 [2024-04-23 16:28:23.211824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.487 [2024-04-23 16:28:23.211840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.211845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.211850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.487 [2024-04-23 16:28:23.211874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.211880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.211886] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.211898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.487 [2024-04-23 16:28:23.211921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.487 [2024-04-23 16:28:23.212055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.487 [2024-04-23 16:28:23.212061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.487 [2024-04-23 16:28:23.212066] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.212071] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=3072, cccid=4 00:29:24.487 [2024-04-23 16:28:23.212077] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=3072 00:29:24.487 [2024-04-23 16:28:23.212257] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.212262] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.252825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.487 [2024-04-23 16:28:23.252839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.252843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.252848] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.487 [2024-04-23 16:28:23.252867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.252872] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.252878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.487 [2024-04-23 16:28:23.252890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.487 [2024-04-23 16:28:23.252907] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.487 [2024-04-23 16:28:23.253023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.487 [2024-04-23 16:28:23.253030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.487 [2024-04-23 16:28:23.253034] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.253039] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8, cccid=4 00:29:24.487 [2024-04-23 16:28:23.253045] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8 00:29:24.487 [2024-04-23 16:28:23.253056] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.253061] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.293826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.487 [2024-04-23 16:28:23.293840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.487 [2024-04-23 16:28:23.293845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.487 [2024-04-23 16:28:23.293850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.487 ===================================================== 00:29:24.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:24.487 ===================================================== 00:29:24.487 Controller Capabilities/Features 00:29:24.487 ================================ 00:29:24.487 Vendor ID: 0000 00:29:24.487 Subsystem Vendor ID: 0000 00:29:24.487 Serial 
Number: ....................
00:29:24.487 Model Number: ........................................
00:29:24.487 Firmware Version: 24.01.1
00:29:24.487 Recommended Arb Burst: 0
00:29:24.487 IEEE OUI Identifier: 00 00 00
00:29:24.487 Multi-path I/O
00:29:24.487 May have multiple subsystem ports: No
00:29:24.487 May have multiple controllers: No
00:29:24.487 Associated with SR-IOV VF: No
00:29:24.487 Max Data Transfer Size: 131072
00:29:24.487 Max Number of Namespaces: 0
00:29:24.487 Max Number of I/O Queues: 1024
00:29:24.487 NVMe Specification Version (VS): 1.3
00:29:24.488 NVMe Specification Version (Identify): 1.3
00:29:24.488 Maximum Queue Entries: 128
00:29:24.488 Contiguous Queues Required: Yes
00:29:24.488 Arbitration Mechanisms Supported
00:29:24.488 Weighted Round Robin: Not Supported
00:29:24.488 Vendor Specific: Not Supported
00:29:24.488 Reset Timeout: 15000 ms
00:29:24.488 Doorbell Stride: 4 bytes
00:29:24.488 NVM Subsystem Reset: Not Supported
00:29:24.488 Command Sets Supported
00:29:24.488 NVM Command Set: Supported
00:29:24.488 Boot Partition: Not Supported
00:29:24.488 Memory Page Size Minimum: 4096 bytes
00:29:24.488 Memory Page Size Maximum: 4096 bytes
00:29:24.488 Persistent Memory Region: Not Supported
00:29:24.488 Optional Asynchronous Events Supported
00:29:24.488 Namespace Attribute Notices: Not Supported
00:29:24.488 Firmware Activation Notices: Not Supported
00:29:24.488 ANA Change Notices: Not Supported
00:29:24.488 PLE Aggregate Log Change Notices: Not Supported
00:29:24.488 LBA Status Info Alert Notices: Not Supported
00:29:24.488 EGE Aggregate Log Change Notices: Not Supported
00:29:24.488 Normal NVM Subsystem Shutdown event: Not Supported
00:29:24.488 Zone Descriptor Change Notices: Not Supported
00:29:24.488 Discovery Log Change Notices: Supported
00:29:24.488 Controller Attributes
00:29:24.488 128-bit Host Identifier: Not Supported
00:29:24.488 Non-Operational Permissive Mode: Not Supported
00:29:24.488 NVM Sets: Not Supported
00:29:24.488 Read Recovery Levels: Not Supported
00:29:24.488 Endurance Groups: Not Supported
00:29:24.488 Predictable Latency Mode: Not Supported
00:29:24.488 Traffic Based Keep ALive: Not Supported
00:29:24.488 Namespace Granularity: Not Supported
00:29:24.488 SQ Associations: Not Supported
00:29:24.488 UUID List: Not Supported
00:29:24.488 Multi-Domain Subsystem: Not Supported
00:29:24.488 Fixed Capacity Management: Not Supported
00:29:24.488 Variable Capacity Management: Not Supported
00:29:24.488 Delete Endurance Group: Not Supported
00:29:24.488 Delete NVM Set: Not Supported
00:29:24.488 Extended LBA Formats Supported: Not Supported
00:29:24.488 Flexible Data Placement Supported: Not Supported
00:29:24.488
00:29:24.488 Controller Memory Buffer Support
00:29:24.488 ================================
00:29:24.488 Supported: No
00:29:24.488
00:29:24.488 Persistent Memory Region Support
00:29:24.488 ================================
00:29:24.488 Supported: No
00:29:24.488
00:29:24.488 Admin Command Set Attributes
00:29:24.488 ============================
00:29:24.488 Security Send/Receive: Not Supported
00:29:24.488 Format NVM: Not Supported
00:29:24.488 Firmware Activate/Download: Not Supported
00:29:24.488 Namespace Management: Not Supported
00:29:24.488 Device Self-Test: Not Supported
00:29:24.488 Directives: Not Supported
00:29:24.488 NVMe-MI: Not Supported
00:29:24.488 Virtualization Management: Not Supported
00:29:24.488 Doorbell Buffer Config: Not Supported
00:29:24.488 Get LBA Status Capability: Not Supported
00:29:24.488 Command & Feature Lockdown Capability: Not Supported
00:29:24.488 Abort Command Limit: 1
00:29:24.488 Async Event Request Limit: 4
00:29:24.488 Number of Firmware Slots: N/A
00:29:24.488 Firmware Slot 1 Read-Only: N/A
00:29:24.488 Firmware Activation Without Reset: N/A
00:29:24.488 Multiple Update Detection Support: N/A
00:29:24.488 Firmware Update Granularity: No Information Provided
00:29:24.488 Per-Namespace SMART Log: No
00:29:24.488 Asymmetric Namespace Access Log Page: Not Supported
00:29:24.488 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:24.488 Command Effects Log Page: Not Supported
00:29:24.488 Get Log Page Extended Data: Supported
00:29:24.488 Telemetry Log Pages: Not Supported
00:29:24.488 Persistent Event Log Pages: Not Supported
00:29:24.488 Supported Log Pages Log Page: May Support
00:29:24.488 Commands Supported & Effects Log Page: Not Supported
00:29:24.488 Feature Identifiers & Effects Log Page:May Support
00:29:24.488 NVMe-MI Commands & Effects Log Page: May Support
00:29:24.488 Data Area 4 for Telemetry Log: Not Supported
00:29:24.488 Error Log Page Entries Supported: 128
00:29:24.488 Keep Alive: Not Supported
00:29:24.488
00:29:24.488 NVM Command Set Attributes
00:29:24.488 ==========================
00:29:24.488 Submission Queue Entry Size
00:29:24.488 Max: 1
00:29:24.488 Min: 1
00:29:24.488 Completion Queue Entry Size
00:29:24.488 Max: 1
00:29:24.488 Min: 1
00:29:24.488 Number of Namespaces: 0
00:29:24.488 Compare Command: Not Supported
00:29:24.488 Write Uncorrectable Command: Not Supported
00:29:24.488 Dataset Management Command: Not Supported
00:29:24.488 Write Zeroes Command: Not Supported
00:29:24.488 Set Features Save Field: Not Supported
00:29:24.488 Reservations: Not Supported
00:29:24.488 Timestamp: Not Supported
00:29:24.488 Copy: Not Supported
00:29:24.488 Volatile Write Cache: Not Present
00:29:24.488 Atomic Write Unit (Normal): 1
00:29:24.488 Atomic Write Unit (PFail): 1
00:29:24.488 Atomic Compare & Write Unit: 1
00:29:24.488 Fused Compare & Write: Supported
00:29:24.488 Scatter-Gather List
00:29:24.488 SGL Command Set: Supported
00:29:24.488 SGL Keyed: Supported
00:29:24.488 SGL Bit Bucket Descriptor: Not Supported
00:29:24.488 SGL Metadata Pointer: Not Supported
00:29:24.488 Oversized SGL: Not Supported
00:29:24.488 SGL Metadata Address: Not Supported
00:29:24.488 SGL Offset: Supported
00:29:24.488 Transport SGL Data Block: Not Supported
00:29:24.488 Replay Protected Memory Block: Not Supported
00:29:24.488
00:29:24.488 Firmware Slot Information
00:29:24.488 =========================
00:29:24.488 Active slot: 0
00:29:24.488
00:29:24.488
00:29:24.488 Error Log
00:29:24.488 =========
00:29:24.488
00:29:24.488 Active Namespaces
00:29:24.488 =================
00:29:24.488 Discovery Log Page
00:29:24.488 ==================
00:29:24.488 Generation Counter: 2
00:29:24.488 Number of Records: 2
00:29:24.488 Record Format: 0
00:29:24.488
00:29:24.488 Discovery Log Entry 0
00:29:24.488 ----------------------
00:29:24.488 Transport Type: 3 (TCP)
00:29:24.488 Address Family: 1 (IPv4)
00:29:24.488 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:24.488 Entry Flags:
00:29:24.488 Duplicate Returned Information: 1
00:29:24.488 Explicit Persistent Connection Support for Discovery: 1
00:29:24.488 Transport Requirements:
00:29:24.488 Secure Channel: Not Required
00:29:24.488 Port ID: 0 (0x0000)
00:29:24.488 Controller ID: 65535 (0xffff)
00:29:24.488 Admin Max SQ Size: 128
00:29:24.488 Transport Service Identifier: 4420
00:29:24.488 NVM Subsystem Qualified Name:
nqn.2014-08.org.nvmexpress.discovery 00:29:24.488 Transport Address: 10.0.0.2 00:29:24.488 Discovery Log Entry 1 00:29:24.488 ---------------------- 00:29:24.488 Transport Type: 3 (TCP) 00:29:24.488 Address Family: 1 (IPv4) 00:29:24.488 Subsystem Type: 2 (NVM Subsystem) 00:29:24.488 Entry Flags: 00:29:24.488 Duplicate Returned Information: 0 00:29:24.488 Explicit Persistent Connection Support for Discovery: 0 00:29:24.488 Transport Requirements: 00:29:24.488 Secure Channel: Not Required 00:29:24.488 Port ID: 0 (0x0000) 00:29:24.488 Controller ID: 65535 (0xffff) 00:29:24.488 Admin Max SQ Size: 128 00:29:24.488 Transport Service Identifier: 4420 00:29:24.488 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:24.488 Transport Address: 10.0.0.2 [2024-04-23 16:28:23.293975] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:24.488 [2024-04-23 16:28:23.293991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.488 [2024-04-23 16:28:23.294001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.488 [2024-04-23 16:28:23.294008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.488 [2024-04-23 16:28:23.294014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.488 [2024-04-23 16:28:23.294027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.488 [2024-04-23 16:28:23.294033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.488 [2024-04-23 16:28:23.294041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.488 [2024-04-23 16:28:23.294053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.488 [2024-04-23 16:28:23.294070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.488 [2024-04-23 16:28:23.294278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.488 [2024-04-23 16:28:23.294285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.488 [2024-04-23 16:28:23.294289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.489 [2024-04-23 16:28:23.294305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.489 [2024-04-23 16:28:23.294326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.489 [2024-04-23 16:28:23.294340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.489 [2024-04-23 16:28:23.294461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.489 [2024-04-23 16:28:23.294468] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.489 [2024-04-23 16:28:23.294472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.489 [2024-04-23 16:28:23.294482] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:24.489 [2024-04-23 16:28:23.294489] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:24.489 [2024-04-23 16:28:23.294500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.294510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.489 [2024-04-23 16:28:23.294522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.489 [2024-04-23 16:28:23.294535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.489 [2024-04-23 16:28:23.294625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.489 [2024-04-23 16:28:23.298640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.489 [2024-04-23 16:28:23.298646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.298651] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.489 [2024-04-23 16:28:23.298665] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.298669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.298674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.489 [2024-04-23 16:28:23.298684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.489 [2024-04-23 16:28:23.298697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.489 [2024-04-23 16:28:23.298796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.489 [2024-04-23 16:28:23.298803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.489 [2024-04-23 16:28:23.298807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.489 [2024-04-23 16:28:23.298811] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.489 [2024-04-23 16:28:23.298821] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:29:24.489 00:29:24.489 16:28:23 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:24.489 [2024-04-23 16:28:23.366168] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:29:24.489 [2024-04-23 16:28:23.366251] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252670 ] 00:29:24.489 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.489 [2024-04-23 16:28:23.415430] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:24.489 [2024-04-23 16:28:23.415506] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:24.489 [2024-04-23 16:28:23.415518] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:24.489 [2024-04-23 16:28:23.415536] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:24.489 [2024-04-23 16:28:23.415547] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:24.489 [2024-04-23 16:28:23.415947] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:24.489 [2024-04-23 16:28:23.415975] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613000001fc0 0 00:29:24.751 [2024-04-23 16:28:23.426639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:24.751 [2024-04-23 16:28:23.426658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:24.751 [2024-04-23 16:28:23.426664] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:24.751 [2024-04-23 16:28:23.426669] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:24.751 [2024-04-23 16:28:23.426708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.426718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.426725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.751 [2024-04-23 16:28:23.426747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:24.751 [2024-04-23 16:28:23.426771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.751 [2024-04-23 16:28:23.434641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.751 [2024-04-23 16:28:23.434654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.751 [2024-04-23 16:28:23.434660] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.434666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.751 [2024-04-23 16:28:23.434681] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:24.751 [2024-04-23 16:28:23.434694] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:24.751 [2024-04-23 16:28:23.434701] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:24.751 [2024-04-23 16:28:23.434717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.434724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:24.751 [2024-04-23 16:28:23.434730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.751 [2024-04-23 16:28:23.434744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.751 [2024-04-23 16:28:23.434762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.751 [2024-04-23 16:28:23.434972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.751 [2024-04-23 16:28:23.434980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.751 [2024-04-23 16:28:23.434990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.434995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.751 [2024-04-23 16:28:23.435002] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:24.751 [2024-04-23 16:28:23.435012] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:24.751 [2024-04-23 16:28:23.435021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.751 [2024-04-23 16:28:23.435042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.751 [2024-04-23 16:28:23.435054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.751 [2024-04-23 16:28:23.435177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.751 [2024-04-23 16:28:23.435186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.751 [2024-04-23 16:28:23.435190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.751 [2024-04-23 16:28:23.435202] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:24.751 [2024-04-23 16:28:23.435211] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:24.751 [2024-04-23 16:28:23.435220] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.751 [2024-04-23 16:28:23.435243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.751 [2024-04-23 16:28:23.435253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.751 [2024-04-23 16:28:23.435356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.751 [2024-04-23 16:28:23.435363] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.751 [2024-04-23 16:28:23.435367] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.751 [2024-04-23 16:28:23.435378] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:24.751 [2024-04-23 16:28:23.435388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435393] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.751 [2024-04-23 16:28:23.435408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.751 [2024-04-23 16:28:23.435418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.751 [2024-04-23 16:28:23.435570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.751 [2024-04-23 16:28:23.435579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.751 [2024-04-23 16:28:23.435583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.751 [2024-04-23 16:28:23.435588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.435594] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:24.752 [2024-04-23 16:28:23.435601] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:24.752 [2024-04-23 16:28:23.435611] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:24.752 [2024-04-23 16:28:23.435718] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:24.752 [2024-04-23 16:28:23.435723] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:24.752 [2024-04-23 16:28:23.435734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.435739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.435745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.435754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.752 [2024-04-23 16:28:23.435765] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.752 [2024-04-23 16:28:23.435915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.752 [2024-04-23 16:28:23.435921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.752 [2024-04-23 16:28:23.435925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 
16:28:23.435929] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.435935] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:24.752 [2024-04-23 16:28:23.435948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.435955] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.435960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.435969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.752 [2024-04-23 16:28:23.435979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.752 [2024-04-23 16:28:23.436082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.752 [2024-04-23 16:28:23.436089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.752 [2024-04-23 16:28:23.436093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436098] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.436104] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:24.752 [2024-04-23 16:28:23.436110] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436119] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:24.752 [2024-04-23 16:28:23.436127] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.752 [2024-04-23 16:28:23.436172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.752 [2024-04-23 16:28:23.436308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.752 [2024-04-23 16:28:23.436315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.752 [2024-04-23 16:28:23.436320] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436325] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=0 00:29:24.752 [2024-04-23 16:28:23.436334] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.752 
[2024-04-23 16:28:23.436473] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436479] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436582] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.752 [2024-04-23 16:28:23.436589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.752 [2024-04-23 16:28:23.436594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.436611] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:24.752 [2024-04-23 16:28:23.436618] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:24.752 [2024-04-23 16:28:23.436624] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:24.752 [2024-04-23 16:28:23.436635] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:24.752 [2024-04-23 16:28:23.436646] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:24.752 [2024-04-23 16:28:23.436652] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436662] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436693] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.752 [2024-04-23 16:28:23.436705] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.752 [2024-04-23 16:28:23.436797] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.752 [2024-04-23 16:28:23.436804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.752 [2024-04-23 16:28:23.436808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.436826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.752 [2024-04-23 16:28:23.436853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.752 [2024-04-23 16:28:23.436881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.752 [2024-04-23 16:28:23.436904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.752 [2024-04-23 16:28:23.436925] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436934] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.436942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436946] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.436952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.752 [2024-04-23 16:28:23.436962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.752 [2024-04-23 16:28:23.436975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:29:24.752 [2024-04-23 16:28:23.436981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:29:24.752 [2024-04-23 16:28:23.436986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:29:24.752 [2024-04-23 16:28:23.436990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.752 [2024-04-23 16:28:23.436996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.752 [2024-04-23 16:28:23.437169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.752 [2024-04-23 16:28:23.437177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.752 [2024-04-23 16:28:23.437181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.437186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.752 [2024-04-23 16:28:23.437192] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:24.752 [2024-04-23 16:28:23.437200] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.437213] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.437224] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:24.752 [2024-04-23 16:28:23.437232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.437236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.752 [2024-04-23 16:28:23.437241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.437250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:24.753 [2024-04-23 16:28:23.437260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.753 [2024-04-23 16:28:23.437359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.437366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.437370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.437374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.437426] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.437438] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.437449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.437453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.437460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.437471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.753 [2024-04-23 16:28:23.437482] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.753 [2024-04-23 16:28:23.437598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.753 [2024-04-23 16:28:23.437604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.753 [2024-04-23 16:28:23.437610] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.437614] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:24.753 [2024-04-23 16:28:23.437620] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.753 [2024-04-23 16:28:23.437777] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.437782] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.482637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.482651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.482656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.482661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.482683] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:24.753 [2024-04-23 16:28:23.482697] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.482708] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.482719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.482724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.482729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.482742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.753 [2024-04-23 16:28:23.482758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.753 [2024-04-23 16:28:23.482890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.753 [2024-04-23 16:28:23.482899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.753 [2024-04-23 16:28:23.482903] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.482908] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:24.753 [2024-04-23 16:28:23.482913] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.753 [2024-04-23 16:28:23.483042] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.483047] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.523845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.523860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.523864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.523870] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.523891] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 
ms) 00:29:24.753 [2024-04-23 16:28:23.523902] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.523915] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.523920] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.523925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.523937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.753 [2024-04-23 16:28:23.523956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.753 [2024-04-23 16:28:23.524068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.753 [2024-04-23 16:28:23.524075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.753 [2024-04-23 16:28:23.524079] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.524083] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=4 00:29:24.753 [2024-04-23 16:28:23.524089] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.753 [2024-04-23 16:28:23.524210] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.524215] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.564948] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.564961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.564965] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.564970] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.564985] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.564994] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.565004] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.565012] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.565020] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.565027] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:24.753 [2024-04-23 16:28:23.565033] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:24.753 [2024-04-23 16:28:23.565039] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:24.753 [2024-04-23 16:28:23.565062] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.565084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.753 [2024-04-23 16:28:23.565092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.565112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.753 [2024-04-23 16:28:23.565128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.753 [2024-04-23 16:28:23.565135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.753 [2024-04-23 16:28:23.565384] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.565392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.565399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.565413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.565421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.565425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.565439] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565447] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.753 [2024-04-23 16:28:23.565455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.753 [2024-04-23 16:28:23.565465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.753 [2024-04-23 16:28:23.565581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.753 [2024-04-23 16:28:23.565588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.753 [2024-04-23 16:28:23.565592] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565597] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on 
tqpair=0x613000001fc0 00:29:24.753 [2024-04-23 16:28:23.565606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.753 [2024-04-23 16:28:23.565610] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.565615] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.565622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.569638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.754 [2024-04-23 16:28:23.569790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.569798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.569802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:24.754 [2024-04-23 16:28:23.569816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.569836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.569846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.754 [2024-04-23 16:28:23.569939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.569945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.569949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:24.754 [2024-04-23 16:28:23.569972] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.569982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.569995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.570004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.570024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.570033] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570046] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.570054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.570064] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570069] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:29:24.754 [2024-04-23 16:28:23.570082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.754 [2024-04-23 16:28:23.570095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:29:24.754 [2024-04-23 16:28:23.570100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:29:24.754 [2024-04-23 16:28:23.570105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:29:24.754 [2024-04-23 16:28:23.570110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:24.754 [2024-04-23 16:28:23.570286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.754 [2024-04-23 16:28:23.570293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.754 [2024-04-23 16:28:23.570298] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570303] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=8192, cccid=5 00:29:24.754 [2024-04-23 16:28:23.570309] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=8192 00:29:24.754 [2024-04-23 16:28:23.570515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570520] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.754 [2024-04-23 16:28:23.570534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.754 [2024-04-23 16:28:23.570538] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570542] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=4 00:29:24.754 [2024-04-23 16:28:23.570547] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:24.754 [2024-04-23 16:28:23.570558] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570562] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570570] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.754 [2024-04-23 16:28:23.570576] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.754 [2024-04-23 16:28:23.570582] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570587] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=512, cccid=6 00:29:24.754 [2024-04-23 16:28:23.570592] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x613000001fc0): expected_datao=0, payload_size=512 00:29:24.754 [2024-04-23 16:28:23.570599] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570603] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:24.754 [2024-04-23 16:28:23.570615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:24.754 [2024-04-23 16:28:23.570618] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570622] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613000001fc0): datao=0, datal=4096, cccid=7 00:29:24.754 [2024-04-23 16:28:23.570631] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x613000001fc0): expected_datao=0, payload_size=4096 00:29:24.754 [2024-04-23 16:28:23.570639] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570643] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.570684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.570688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570693] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x613000001fc0 00:29:24.754 [2024-04-23 16:28:23.570710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.570716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.570720] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570724] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x613000001fc0 00:29:24.754 [2024-04-23 16:28:23.570736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.570742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.570746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x613000001fc0 00:29:24.754 [2024-04-23 16:28:23.570759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.754 [2024-04-23 16:28:23.570766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.754 [2024-04-23 16:28:23.570769] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.754 [2024-04-23 16:28:23.570774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:29:24.754 
===================================================== 00:29:24.754 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.754 ===================================================== 00:29:24.754 Controller Capabilities/Features 00:29:24.754 ================================ 00:29:24.754 Vendor ID: 8086 00:29:24.754 Subsystem Vendor ID: 8086 00:29:24.754 Serial Number: SPDK00000000000001 00:29:24.754 Model Number: SPDK bdev Controller 00:29:24.754 Firmware Version: 24.01.1 00:29:24.754 Recommended Arb Burst: 6 00:29:24.754 IEEE OUI Identifier: e4 d2 5c 00:29:24.754 Multi-path I/O 00:29:24.754 May have multiple subsystem ports: Yes 00:29:24.754 May have multiple controllers: Yes 00:29:24.754 Associated with SR-IOV VF: No 00:29:24.754 Max Data Transfer Size: 131072 00:29:24.754 Max Number of Namespaces: 32 00:29:24.754 Max Number of I/O Queues: 127 00:29:24.754 NVMe Specification Version (VS): 1.3 00:29:24.754 NVMe Specification Version (Identify): 1.3 00:29:24.754 Maximum Queue Entries: 128 00:29:24.754 Contiguous Queues Required: Yes 00:29:24.754 Arbitration Mechanisms Supported 00:29:24.754 Weighted Round Robin: Not Supported 00:29:24.754 Vendor Specific: Not Supported 00:29:24.754 Reset Timeout: 15000 ms 00:29:24.754 Doorbell Stride: 4 bytes 00:29:24.754 NVM Subsystem Reset: Not Supported 00:29:24.754 Command Sets Supported 00:29:24.754 NVM Command Set: Supported 00:29:24.754 Boot Partition: Not Supported 00:29:24.754 Memory Page Size Minimum: 4096 bytes 00:29:24.754 Memory Page Size Maximum: 4096 bytes 00:29:24.754 Persistent Memory Region: Not Supported 00:29:24.754 Optional Asynchronous Events Supported 00:29:24.754 Namespace Attribute Notices: Supported 00:29:24.754 Firmware Activation Notices: Not Supported 00:29:24.754 ANA Change Notices: Not Supported 00:29:24.754 PLE Aggregate Log Change Notices: Not Supported 00:29:24.755 LBA Status Info Alert Notices: Not Supported 00:29:24.755 EGE Aggregate Log Change Notices: Not Supported 00:29:24.755 Normal NVM Subsystem Shutdown event: Not Supported 00:29:24.755 Zone Descriptor Change Notices: Not Supported 00:29:24.755 Discovery Log Change Notices: Not Supported 00:29:24.755 Controller Attributes 00:29:24.755 128-bit Host Identifier: Supported 00:29:24.755 Non-Operational Permissive Mode: Not Supported 00:29:24.755 NVM Sets: Not Supported 00:29:24.755 Read Recovery Levels: Not Supported 00:29:24.755 Endurance Groups: Not Supported 00:29:24.755 Predictable Latency Mode: Not Supported 00:29:24.755 Traffic Based Keep ALive: Not Supported 00:29:24.755 Namespace Granularity: Not Supported 00:29:24.755 SQ Associations: Not Supported 00:29:24.755 UUID List: Not Supported 00:29:24.755 Multi-Domain Subsystem: Not Supported 00:29:24.755 Fixed Capacity Management: Not Supported 00:29:24.755 Variable Capacity Management: Not Supported 00:29:24.755 Delete Endurance Group: Not Supported 00:29:24.755 Delete NVM Set: Not Supported 00:29:24.755 Extended LBA Formats Supported: Not Supported 00:29:24.755 Flexible Data Placement Supported: Not Supported 00:29:24.755 00:29:24.755 Controller Memory Buffer Support 00:29:24.755 ================================ 00:29:24.755 Supported: No 00:29:24.755 00:29:24.755 Persistent Memory Region Support 00:29:24.755 ================================ 00:29:24.755 Supported: No 00:29:24.755 00:29:24.755 Admin Command Set Attributes 00:29:24.755 ============================ 00:29:24.755 Security Send/Receive: Not Supported 00:29:24.755 Format NVM: Not Supported 00:29:24.755 Firmware Activate/Download: 
Not Supported 00:29:24.755 Namespace Management: Not Supported 00:29:24.755 Device Self-Test: Not Supported 00:29:24.755 Directives: Not Supported 00:29:24.755 NVMe-MI: Not Supported 00:29:24.755 Virtualization Management: Not Supported 00:29:24.755 Doorbell Buffer Config: Not Supported 00:29:24.755 Get LBA Status Capability: Not Supported 00:29:24.755 Command & Feature Lockdown Capability: Not Supported 00:29:24.755 Abort Command Limit: 4 00:29:24.755 Async Event Request Limit: 4 00:29:24.755 Number of Firmware Slots: N/A 00:29:24.755 Firmware Slot 1 Read-Only: N/A 00:29:24.755 Firmware Activation Without Reset: N/A 00:29:24.755 Multiple Update Detection Support: N/A 00:29:24.755 Firmware Update Granularity: No Information Provided 00:29:24.755 Per-Namespace SMART Log: No 00:29:24.755 Asymmetric Namespace Access Log Page: Not Supported 00:29:24.755 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:24.755 Command Effects Log Page: Supported 00:29:24.755 Get Log Page Extended Data: Supported 00:29:24.755 Telemetry Log Pages: Not Supported 00:29:24.755 Persistent Event Log Pages: Not Supported 00:29:24.755 Supported Log Pages Log Page: May Support 00:29:24.755 Commands Supported & Effects Log Page: Not Supported 00:29:24.755 Feature Identifiers & Effects Log Page:May Support 00:29:24.755 NVMe-MI Commands & Effects Log Page: May Support 00:29:24.755 Data Area 4 for Telemetry Log: Not Supported 00:29:24.755 Error Log Page Entries Supported: 128 00:29:24.755 Keep Alive: Supported 00:29:24.755 Keep Alive Granularity: 10000 ms 00:29:24.755 00:29:24.755 NVM Command Set Attributes 00:29:24.755 ========================== 00:29:24.755 Submission Queue Entry Size 00:29:24.755 Max: 64 00:29:24.755 Min: 64 00:29:24.755 Completion Queue Entry Size 00:29:24.755 Max: 16 00:29:24.755 Min: 16 00:29:24.755 Number of Namespaces: 32 00:29:24.755 Compare Command: Supported 00:29:24.755 Write Uncorrectable Command: Not Supported 00:29:24.755 Dataset Management Command: Supported 00:29:24.755 Write Zeroes Command: Supported 00:29:24.755 Set Features Save Field: Not Supported 00:29:24.755 Reservations: Supported 00:29:24.755 Timestamp: Not Supported 00:29:24.755 Copy: Supported 00:29:24.755 Volatile Write Cache: Present 00:29:24.755 Atomic Write Unit (Normal): 1 00:29:24.755 Atomic Write Unit (PFail): 1 00:29:24.755 Atomic Compare & Write Unit: 1 00:29:24.755 Fused Compare & Write: Supported 00:29:24.755 Scatter-Gather List 00:29:24.755 SGL Command Set: Supported 00:29:24.755 SGL Keyed: Supported 00:29:24.755 SGL Bit Bucket Descriptor: Not Supported 00:29:24.755 SGL Metadata Pointer: Not Supported 00:29:24.755 Oversized SGL: Not Supported 00:29:24.755 SGL Metadata Address: Not Supported 00:29:24.755 SGL Offset: Supported 00:29:24.755 Transport SGL Data Block: Not Supported 00:29:24.755 Replay Protected Memory Block: Not Supported 00:29:24.755 00:29:24.755 Firmware Slot Information 00:29:24.755 ========================= 00:29:24.755 Active slot: 1 00:29:24.755 Slot 1 Firmware Revision: 24.01.1 00:29:24.755 00:29:24.755 00:29:24.755 Commands Supported and Effects 00:29:24.755 ============================== 00:29:24.755 Admin Commands 00:29:24.755 -------------- 00:29:24.755 Get Log Page (02h): Supported 00:29:24.755 Identify (06h): Supported 00:29:24.755 Abort (08h): Supported 00:29:24.755 Set Features (09h): Supported 00:29:24.755 Get Features (0Ah): Supported 00:29:24.755 Asynchronous Event Request (0Ch): Supported 00:29:24.755 Keep Alive (18h): Supported 00:29:24.755 I/O Commands 00:29:24.755 ------------ 
00:29:24.755 Flush (00h): Supported LBA-Change 00:29:24.755 Write (01h): Supported LBA-Change 00:29:24.755 Read (02h): Supported 00:29:24.755 Compare (05h): Supported 00:29:24.755 Write Zeroes (08h): Supported LBA-Change 00:29:24.755 Dataset Management (09h): Supported LBA-Change 00:29:24.755 Copy (19h): Supported LBA-Change 00:29:24.755 Unknown (79h): Supported LBA-Change 00:29:24.755 Unknown (7Ah): Supported 00:29:24.755 00:29:24.755 Error Log 00:29:24.755 ========= 00:29:24.755 00:29:24.755 Arbitration 00:29:24.755 =========== 00:29:24.755 Arbitration Burst: 1 00:29:24.755 00:29:24.755 Power Management 00:29:24.755 ================ 00:29:24.755 Number of Power States: 1 00:29:24.755 Current Power State: Power State #0 00:29:24.755 Power State #0: 00:29:24.755 Max Power: 0.00 W 00:29:24.755 Non-Operational State: Operational 00:29:24.755 Entry Latency: Not Reported 00:29:24.755 Exit Latency: Not Reported 00:29:24.755 Relative Read Throughput: 0 00:29:24.755 Relative Read Latency: 0 00:29:24.755 Relative Write Throughput: 0 00:29:24.755 Relative Write Latency: 0 00:29:24.755 Idle Power: Not Reported 00:29:24.755 Active Power: Not Reported 00:29:24.755 Non-Operational Permissive Mode: Not Supported 00:29:24.755 00:29:24.755 Health Information 00:29:24.755 ================== 00:29:24.755 Critical Warnings: 00:29:24.755 Available Spare Space: OK 00:29:24.755 Temperature: OK 00:29:24.755 Device Reliability: OK 00:29:24.755 Read Only: No 00:29:24.755 Volatile Memory Backup: OK 00:29:24.755 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:24.755 Temperature Threshold: [2024-04-23 16:28:23.570909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.755 [2024-04-23 16:28:23.570914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.755 [2024-04-23 16:28:23.570921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613000001fc0) 00:29:24.755 [2024-04-23 16:28:23.570930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.755 [2024-04-23 16:28:23.570942] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:29:24.755 [2024-04-23 16:28:23.571089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.755 [2024-04-23 16:28:23.571097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.755 [2024-04-23 16:28:23.571101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.755 [2024-04-23 16:28:23.571108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x613000001fc0 00:29:24.755 [2024-04-23 16:28:23.571148] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:24.755 [2024-04-23 16:28:23.571162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.755 [2024-04-23 16:28:23.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.755 [2024-04-23 16:28:23.571177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.755 [2024-04-23 16:28:23.571183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:24.755 [2024-04-23 16:28:23.571193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.755 [2024-04-23 16:28:23.571200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.755 [2024-04-23 16:28:23.571205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.571214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.571227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.571443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.571450] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.571455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.571469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.571489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.571503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.571612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.571619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.571622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.571637] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:24.756 [2024-04-23 16:28:23.571644] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:24.756 [2024-04-23 16:28:23.571654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.571675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.571685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.571831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.571838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.571841] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.571858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.571874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.571884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.571976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.571982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.571986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.571990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.572001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.572145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572273] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572277] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.572287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.572440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572449] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 16:28:23.572663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.756 [2024-04-23 
16:28:23.572806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.756 [2024-04-23 16:28:23.572815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.756 [2024-04-23 16:28:23.572823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.756 [2024-04-23 16:28:23.572832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.756 [2024-04-23 16:28:23.572933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.756 [2024-04-23 16:28:23.572940] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.756 [2024-04-23 16:28:23.572944] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.572948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.572957] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.572962] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.572966] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.757 [2024-04-23 16:28:23.572973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.757 [2024-04-23 16:28:23.572982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.757 [2024-04-23 16:28:23.573103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.757 [2024-04-23 16:28:23.573109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.757 [2024-04-23 16:28:23.573113] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573117] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.573130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.757 [2024-04-23 16:28:23.573146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.757 [2024-04-23 16:28:23.573156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.757 [2024-04-23 16:28:23.573300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.757 [2024-04-23 16:28:23.573306] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.757 [2024-04-23 16:28:23.573310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573314] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.573324] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573328] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.757 [2024-04-23 16:28:23.573343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.757 [2024-04-23 16:28:23.573352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.757 [2024-04-23 16:28:23.573522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.757 [2024-04-23 16:28:23.573529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.757 [2024-04-23 16:28:23.573533] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.573547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.573555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.757 [2024-04-23 16:28:23.573563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.757 [2024-04-23 16:28:23.573573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.757 [2024-04-23 16:28:23.577639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.757 [2024-04-23 16:28:23.577650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.757 [2024-04-23 16:28:23.577654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.577659] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.577670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.577675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.577679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613000001fc0) 00:29:24.757 [2024-04-23 16:28:23.577688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.757 [2024-04-23 16:28:23.577700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:29:24.757 [2024-04-23 16:28:23.577848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:24.757 [2024-04-23 16:28:23.577855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:24.757 [2024-04-23 16:28:23.577859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:24.757 [2024-04-23 16:28:23.577863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x613000001fc0 00:29:24.757 [2024-04-23 16:28:23.577874] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:29:24.757 0 Kelvin (-273 Celsius) 00:29:24.757 Available Spare: 0% 00:29:24.757 Available Spare Threshold: 0% 00:29:24.757 
Life Percentage Used: 0% 00:29:24.757 Data Units Read: 0 00:29:24.757 Data Units Written: 0 00:29:24.757 Host Read Commands: 0 00:29:24.757 Host Write Commands: 0 00:29:24.757 Controller Busy Time: 0 minutes 00:29:24.757 Power Cycles: 0 00:29:24.757 Power On Hours: 0 hours 00:29:24.757 Unsafe Shutdowns: 0 00:29:24.757 Unrecoverable Media Errors: 0 00:29:24.757 Lifetime Error Log Entries: 0 00:29:24.757 Warning Temperature Time: 0 minutes 00:29:24.757 Critical Temperature Time: 0 minutes 00:29:24.757 00:29:24.757 Number of Queues 00:29:24.757 ================ 00:29:24.757 Number of I/O Submission Queues: 127 00:29:24.757 Number of I/O Completion Queues: 127 00:29:24.757 00:29:24.757 Active Namespaces 00:29:24.757 ================= 00:29:24.757 Namespace ID:1 00:29:24.757 Error Recovery Timeout: Unlimited 00:29:24.757 Command Set Identifier: NVM (00h) 00:29:24.757 Deallocate: Supported 00:29:24.757 Deallocated/Unwritten Error: Not Supported 00:29:24.757 Deallocated Read Value: Unknown 00:29:24.757 Deallocate in Write Zeroes: Not Supported 00:29:24.757 Deallocated Guard Field: 0xFFFF 00:29:24.757 Flush: Supported 00:29:24.757 Reservation: Supported 00:29:24.757 Namespace Sharing Capabilities: Multiple Controllers 00:29:24.757 Size (in LBAs): 131072 (0GiB) 00:29:24.757 Capacity (in LBAs): 131072 (0GiB) 00:29:24.757 Utilization (in LBAs): 131072 (0GiB) 00:29:24.757 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:24.757 EUI64: ABCDEF0123456789 00:29:24.757 UUID: 181ac6fc-22f1-4467-8fef-6cd27e646289 00:29:24.757 Thin Provisioning: Not Supported 00:29:24.757 Per-NS Atomic Units: Yes 00:29:24.757 Atomic Boundary Size (Normal): 0 00:29:24.757 Atomic Boundary Size (PFail): 0 00:29:24.757 Atomic Boundary Offset: 0 00:29:24.757 Maximum Single Source Range Length: 65535 00:29:24.757 Maximum Copy Length: 65535 00:29:24.757 Maximum Source Range Count: 1 00:29:24.757 NGUID/EUI64 Never Reused: No 00:29:24.757 Namespace Write Protected: No 00:29:24.757 Number of LBA Formats: 1 00:29:24.757 Current LBA Format: LBA Format #00 00:29:24.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:24.757 00:29:24.757 16:28:23 -- host/identify.sh@51 -- # sync 00:29:24.757 16:28:23 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.757 16:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.757 16:28:23 -- common/autotest_common.sh@10 -- # set +x 00:29:24.757 16:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.757 16:28:23 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:24.757 16:28:23 -- host/identify.sh@56 -- # nvmftestfini 00:29:24.757 16:28:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:24.757 16:28:23 -- nvmf/common.sh@116 -- # sync 00:29:24.757 16:28:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:24.757 16:28:23 -- nvmf/common.sh@119 -- # set +e 00:29:24.757 16:28:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:24.757 16:28:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:24.757 rmmod nvme_tcp 00:29:24.757 rmmod nvme_fabrics 00:29:24.757 rmmod nvme_keyring 00:29:25.016 16:28:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.016 16:28:23 -- nvmf/common.sh@123 -- # set -e 00:29:25.016 16:28:23 -- nvmf/common.sh@124 -- # return 0 00:29:25.016 16:28:23 -- nvmf/common.sh@477 -- # '[' -n 3252350 ']' 00:29:25.016 16:28:23 -- nvmf/common.sh@478 -- # killprocess 3252350 00:29:25.016 16:28:23 -- common/autotest_common.sh@926 -- # '[' -z 3252350 ']' 00:29:25.016 16:28:23 -- 
common/autotest_common.sh@930 -- # kill -0 3252350 00:29:25.016 16:28:23 -- common/autotest_common.sh@931 -- # uname 00:29:25.016 16:28:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.016 16:28:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3252350 00:29:25.016 16:28:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.016 16:28:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:25.016 16:28:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3252350' 00:29:25.016 killing process with pid 3252350 00:29:25.016 16:28:23 -- common/autotest_common.sh@945 -- # kill 3252350 00:29:25.016 [2024-04-23 16:28:23.729532] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:25.016 16:28:23 -- common/autotest_common.sh@950 -- # wait 3252350 00:29:25.587 16:28:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.587 16:28:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.587 16:28:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.587 16:28:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.587 16:28:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.587 16:28:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.587 16:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.587 16:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.683 16:28:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:27.683 00:29:27.683 real 0m9.840s 00:29:27.683 user 0m8.523s 00:29:27.683 sys 0m4.632s 00:29:27.683 16:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.683 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:29:27.683 ************************************ 00:29:27.683 END TEST nvmf_identify 00:29:27.683 ************************************ 00:29:27.683 16:28:26 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.683 16:28:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:27.683 16:28:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.683 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:29:27.683 ************************************ 00:29:27.683 START TEST nvmf_perf 00:29:27.683 ************************************ 00:29:27.683 16:28:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:27.683 * Looking for test storage... 
00:29:27.683 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:27.683 16:28:26 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.683 16:28:26 -- nvmf/common.sh@7 -- # uname -s 00:29:27.683 16:28:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.683 16:28:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.683 16:28:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.683 16:28:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.683 16:28:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.683 16:28:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.683 16:28:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.683 16:28:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.683 16:28:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.683 16:28:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.683 16:28:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:27.683 16:28:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:27.683 16:28:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.683 16:28:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.683 16:28:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:27.683 16:28:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:27.683 16:28:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.683 16:28:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.683 16:28:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.683 16:28:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.683 16:28:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.683 16:28:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.683 16:28:26 -- paths/export.sh@5 -- # export PATH 00:29:27.683 16:28:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.683 16:28:26 -- nvmf/common.sh@46 -- # : 0 00:29:27.683 16:28:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:27.683 16:28:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:27.683 16:28:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:27.683 16:28:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.683 16:28:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.683 16:28:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:27.683 16:28:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:27.683 16:28:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:27.683 16:28:26 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:27.683 16:28:26 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:27.683 16:28:26 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:29:27.683 16:28:26 -- host/perf.sh@17 -- # nvmftestinit 00:29:27.683 16:28:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:27.683 16:28:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.683 16:28:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:27.683 16:28:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:27.683 16:28:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:27.683 16:28:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.683 16:28:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.683 16:28:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.683 16:28:26 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:29:27.683 16:28:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:27.683 16:28:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:27.683 16:28:26 -- common/autotest_common.sh@10 -- # set +x 00:29:32.975 16:28:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:32.975 16:28:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:32.975 16:28:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:32.975 16:28:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:32.975 16:28:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:32.975 16:28:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:32.975 16:28:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:32.975 16:28:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:29:32.975 16:28:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:32.975 16:28:31 -- nvmf/common.sh@295 -- # e810=() 00:29:32.975 16:28:31 -- nvmf/common.sh@295 -- # local -ga e810 00:29:32.975 16:28:31 -- nvmf/common.sh@296 -- # x722=() 00:29:32.975 16:28:31 -- nvmf/common.sh@296 -- # local -ga x722 00:29:32.975 16:28:31 -- nvmf/common.sh@297 -- # mlx=() 00:29:32.975 16:28:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:32.975 16:28:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.975 16:28:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:32.975 16:28:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:32.975 16:28:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:32.975 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:32.975 16:28:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:32.975 16:28:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:32.975 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:32.975 16:28:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:32.975 16:28:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.975 16:28:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.975 16:28:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:29:32.975 Found net devices under 0000:27:00.0: cvl_0_0 00:29:32.975 16:28:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.975 16:28:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:32.975 16:28:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.975 16:28:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.975 16:28:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:32.975 Found net devices under 0000:27:00.1: cvl_0_1 00:29:32.975 16:28:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.975 16:28:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:32.975 16:28:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:32.975 16:28:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:32.975 16:28:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.975 16:28:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.975 16:28:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.975 16:28:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:32.975 16:28:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.975 16:28:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.975 16:28:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:32.975 16:28:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.975 16:28:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.975 16:28:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:32.975 16:28:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:32.975 16:28:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.975 16:28:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.235 16:28:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.235 16:28:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.235 16:28:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:33.235 16:28:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.235 16:28:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.235 16:28:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.235 16:28:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:33.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:29:33.235 00:29:33.235 --- 10.0.0.2 ping statistics --- 00:29:33.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.235 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:29:33.236 16:28:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.468 ms 00:29:33.236 00:29:33.236 --- 10.0.0.1 ping statistics --- 00:29:33.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.236 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:29:33.236 16:28:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.236 16:28:32 -- nvmf/common.sh@410 -- # return 0 00:29:33.236 16:28:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:33.236 16:28:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.236 16:28:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:33.236 16:28:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:33.236 16:28:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.236 16:28:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:33.236 16:28:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:33.236 16:28:32 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:33.236 16:28:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:33.236 16:28:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:33.236 16:28:32 -- common/autotest_common.sh@10 -- # set +x 00:29:33.236 16:28:32 -- nvmf/common.sh@469 -- # nvmfpid=3256849 00:29:33.236 16:28:32 -- nvmf/common.sh@470 -- # waitforlisten 3256849 00:29:33.236 16:28:32 -- common/autotest_common.sh@819 -- # '[' -z 3256849 ']' 00:29:33.236 16:28:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.236 16:28:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:33.236 16:28:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.236 16:28:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:33.236 16:28:32 -- common/autotest_common.sh@10 -- # set +x 00:29:33.236 16:28:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.497 [2024-04-23 16:28:32.214692] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:29:33.497 [2024-04-23 16:28:32.214799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.497 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.497 [2024-04-23 16:28:32.344283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.758 [2024-04-23 16:28:32.442495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:33.758 [2024-04-23 16:28:32.442706] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.758 [2024-04-23 16:28:32.442722] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.758 [2024-04-23 16:28:32.442733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
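The nvmf_tcp_init trace above builds a namespace-backed loopback: one port of the NIC pair (cvl_0_0) is moved into a private namespace for the target while the other (cvl_0_1) stays in the root namespace as the initiator side, and the two are addressed back-to-back on 10.0.0.0/24. A minimal sketch of that wiring, condensed from the commands in the trace (interface names and addresses taken from it, not a general recipe):

# sketch of the namespace-backed loopback built by nvmf_tcp_init (names/addresses from the trace)
ip netns add cvl_0_0_ns_spdk                                    # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port out of the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                              # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator reachability check

The target application is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix seen above), so nvmf_tgt listens on 10.0.0.2 while the perf initiator connects from the root namespace.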
00:29:33.758 [2024-04-23 16:28:32.445660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.758 [2024-04-23 16:28:32.445682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.758 [2024-04-23 16:28:32.445798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.758 [2024-04-23 16:28:32.445808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.019 16:28:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:34.019 16:28:32 -- common/autotest_common.sh@852 -- # return 0 00:29:34.019 16:28:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:34.019 16:28:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:34.019 16:28:32 -- common/autotest_common.sh@10 -- # set +x 00:29:34.278 16:28:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.278 16:28:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:34.278 16:28:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:35.221 16:28:33 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:35.221 16:28:33 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:35.221 16:28:33 -- host/perf.sh@30 -- # local_nvme_trid=0000:03:00.0 00:29:35.221 16:28:33 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:35.482 16:28:34 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:35.482 16:28:34 -- host/perf.sh@33 -- # '[' -n 0000:03:00.0 ']' 00:29:35.482 16:28:34 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:35.482 16:28:34 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:35.482 16:28:34 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:35.482 [2024-04-23 16:28:34.298398] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.482 16:28:34 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.744 16:28:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:35.744 16:28:34 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.744 16:28:34 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:35.744 16:28:34 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:36.003 16:28:34 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.003 [2024-04-23 16:28:34.915769] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.261 16:28:34 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.261 16:28:35 -- host/perf.sh@52 -- # '[' -n 0000:03:00.0 ']' 00:29:36.261 16:28:35 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:29:36.261 16:28:35 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:36.261 16:28:35 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:29:37.641 Initializing NVMe Controllers 00:29:37.641 Attached to NVMe Controller at 0000:03:00.0 [1344:51c3] 00:29:37.641 Associating PCIE (0000:03:00.0) NSID 1 with lcore 0 00:29:37.641 Initialization complete. Launching workers. 00:29:37.641 ======================================================== 00:29:37.641 Latency(us) 00:29:37.641 Device Information : IOPS MiB/s Average min max 00:29:37.641 PCIE (0000:03:00.0) NSID 1 from core 0: 90938.45 355.23 351.55 71.44 5223.68 00:29:37.641 ======================================================== 00:29:37.641 Total : 90938.45 355.23 351.55 71.44 5223.68 00:29:37.641 00:29:37.641 16:28:36 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.641 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.545 Initializing NVMe Controllers 00:29:39.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:39.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:39.545 Initialization complete. Launching workers. 00:29:39.545 ======================================================== 00:29:39.545 Latency(us) 00:29:39.545 Device Information : IOPS MiB/s Average min max 00:29:39.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.00 0.27 14856.66 250.67 46381.29 00:29:39.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23109.37 7965.54 48019.67 00:29:39.545 ======================================================== 00:29:39.545 Total : 115.00 0.45 18085.98 250.67 48019.67 00:29:39.545 00:29:39.545 16:28:38 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.545 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.488 Initializing NVMe Controllers 00:29:40.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.488 Initialization complete. Launching workers. 
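The MiB/s column in these latency tables follows directly from the IOPS column and the 4096-byte I/O size used by these runs. A quick recomputation against the local PCIe result above (figures copied from the trace, small rounding assumed):

# 90938.45 IOPS * 4096 B per I/O / 1048576 B per MiB ~= 355.2 MiB/s, matching the table
echo "90938.45 * 4096 / 1048576" | bc -l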
00:29:40.488 ======================================================== 00:29:40.488 Latency(us) 00:29:40.488 Device Information : IOPS MiB/s Average min max 00:29:40.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10745.00 41.97 2979.27 350.66 9194.93 00:29:40.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3852.00 15.05 8363.47 5997.21 16419.40 00:29:40.488 ======================================================== 00:29:40.488 Total : 14597.00 57.02 4400.11 350.66 16419.40 00:29:40.488 00:29:40.488 16:28:39 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:29:40.488 16:28:39 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.026 Initializing NVMe Controllers 00:29:43.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.026 Controller IO queue size 128, less than required. 00:29:43.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.026 Controller IO queue size 128, less than required. 00:29:43.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.026 Initialization complete. Launching workers. 00:29:43.026 ======================================================== 00:29:43.026 Latency(us) 00:29:43.026 Device Information : IOPS MiB/s Average min max 00:29:43.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 871.44 217.86 153524.16 83482.29 213357.64 00:29:43.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.96 153.24 219636.81 85969.93 350831.27 00:29:43.026 ======================================================== 00:29:43.026 Total : 1484.40 371.10 180824.30 83482.29 350831.27 00:29:43.026 00:29:43.026 16:28:41 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:43.026 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.285 No valid NVMe controllers or AIO or URING devices found 00:29:43.285 Initializing NVMe Controllers 00:29:43.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.285 Controller IO queue size 128, less than required. 00:29:43.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.285 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:43.285 Controller IO queue size 128, less than required. 00:29:43.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.285 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:43.285 WARNING: Some requested NVMe devices were skipped 00:29:43.285 16:28:42 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:43.544 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.083 Initializing NVMe Controllers 00:29:46.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.083 Controller IO queue size 128, less than required. 00:29:46.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.083 Controller IO queue size 128, less than required. 00:29:46.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:46.083 Initialization complete. Launching workers. 00:29:46.083 00:29:46.083 ==================== 00:29:46.083 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:46.083 TCP transport: 00:29:46.083 polls: 44515 00:29:46.083 idle_polls: 13059 00:29:46.083 sock_completions: 31456 00:29:46.083 nvme_completions: 4194 00:29:46.083 submitted_requests: 6430 00:29:46.083 queued_requests: 1 00:29:46.083 00:29:46.083 ==================== 00:29:46.083 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:46.083 TCP transport: 00:29:46.083 polls: 47935 00:29:46.083 idle_polls: 15589 00:29:46.083 sock_completions: 32346 00:29:46.083 nvme_completions: 3591 00:29:46.083 submitted_requests: 5532 00:29:46.083 queued_requests: 1 00:29:46.083 ======================================================== 00:29:46.083 Latency(us) 00:29:46.083 Device Information : IOPS MiB/s Average min max 00:29:46.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1109.98 277.49 119066.51 48678.55 243398.39 00:29:46.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 959.75 239.94 136033.97 72769.12 203803.87 00:29:46.083 ======================================================== 00:29:46.083 Total : 2069.73 517.43 126934.47 48678.55 243398.39 00:29:46.083 00:29:46.083 16:28:44 -- host/perf.sh@66 -- # sync 00:29:46.083 16:28:44 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.083 16:28:44 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:46.083 16:28:44 -- host/perf.sh@71 -- # '[' -n 0000:03:00.0 ']' 00:29:46.083 16:28:44 -- host/perf.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:47.024 16:28:45 -- host/perf.sh@72 -- # ls_guid=ca613039-47a5-4244-af47-4ae675a1d7d9 00:29:47.024 16:28:45 -- host/perf.sh@73 -- # get_lvs_free_mb ca613039-47a5-4244-af47-4ae675a1d7d9 00:29:47.024 16:28:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ca613039-47a5-4244-af47-4ae675a1d7d9 00:29:47.024 16:28:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:47.024 16:28:45 -- common/autotest_common.sh@1345 -- # local fc 00:29:47.024 16:28:45 -- common/autotest_common.sh@1346 -- # local cs 00:29:47.024 16:28:45 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:47.024 16:28:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:47.024 { 00:29:47.024 "uuid": "ca613039-47a5-4244-af47-4ae675a1d7d9", 00:29:47.024 "name": "lvs_0", 00:29:47.024 "base_bdev": "Nvme0n1", 00:29:47.024 "total_data_clusters": 228704, 00:29:47.024 "free_clusters": 228704, 00:29:47.024 "block_size": 512, 00:29:47.024 "cluster_size": 4194304 00:29:47.024 } 00:29:47.024 ]' 00:29:47.024 16:28:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ca613039-47a5-4244-af47-4ae675a1d7d9") .free_clusters' 00:29:47.024 16:28:45 -- common/autotest_common.sh@1348 -- # fc=228704 00:29:47.024 16:28:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ca613039-47a5-4244-af47-4ae675a1d7d9") .cluster_size' 00:29:47.024 16:28:45 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:47.024 16:28:45 -- common/autotest_common.sh@1352 -- # free_mb=914816 00:29:47.024 16:28:45 -- common/autotest_common.sh@1353 -- # echo 914816 00:29:47.024 914816 00:29:47.024 16:28:45 -- host/perf.sh@77 -- # '[' 914816 -gt 20480 ']' 00:29:47.024 16:28:45 -- host/perf.sh@78 -- # free_mb=20480 00:29:47.024 16:28:45 -- host/perf.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ca613039-47a5-4244-af47-4ae675a1d7d9 lbd_0 20480 00:29:47.286 16:28:46 -- host/perf.sh@80 -- # lb_guid=20b12767-bb28-4597-8b43-5659a6a828cf 00:29:47.286 16:28:46 -- host/perf.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 20b12767-bb28-4597-8b43-5659a6a828cf lvs_n_0 00:29:47.860 16:28:46 -- host/perf.sh@83 -- # ls_nested_guid=a6173a08-92f9-4fa3-8cca-7830c2e3a3c4 00:29:47.860 16:28:46 -- host/perf.sh@84 -- # get_lvs_free_mb a6173a08-92f9-4fa3-8cca-7830c2e3a3c4 00:29:47.860 16:28:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a6173a08-92f9-4fa3-8cca-7830c2e3a3c4 00:29:47.860 16:28:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:47.860 16:28:46 -- common/autotest_common.sh@1345 -- # local fc 00:29:47.860 16:28:46 -- common/autotest_common.sh@1346 -- # local cs 00:29:47.860 16:28:46 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.124 16:28:46 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:48.124 { 00:29:48.124 "uuid": "ca613039-47a5-4244-af47-4ae675a1d7d9", 00:29:48.124 "name": "lvs_0", 00:29:48.124 "base_bdev": "Nvme0n1", 00:29:48.124 "total_data_clusters": 228704, 00:29:48.124 "free_clusters": 223584, 00:29:48.124 "block_size": 512, 00:29:48.124 "cluster_size": 4194304 00:29:48.124 }, 00:29:48.124 { 00:29:48.124 "uuid": "a6173a08-92f9-4fa3-8cca-7830c2e3a3c4", 00:29:48.124 "name": "lvs_n_0", 00:29:48.124 "base_bdev": "20b12767-bb28-4597-8b43-5659a6a828cf", 00:29:48.124 "total_data_clusters": 5114, 00:29:48.124 "free_clusters": 5114, 00:29:48.124 "block_size": 512, 00:29:48.124 "cluster_size": 4194304 00:29:48.124 } 00:29:48.124 ]' 00:29:48.124 16:28:46 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a6173a08-92f9-4fa3-8cca-7830c2e3a3c4") .free_clusters' 00:29:48.124 16:28:46 -- common/autotest_common.sh@1348 -- # fc=5114 00:29:48.124 16:28:46 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a6173a08-92f9-4fa3-8cca-7830c2e3a3c4") .cluster_size' 00:29:48.124 16:28:46 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:48.124 16:28:46 -- common/autotest_common.sh@1352 -- # free_mb=20456 
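get_lvs_free_mb converts the free_clusters count reported by bdev_lvol_get_lvstores into MiB using the 4 MiB (4194304-byte) cluster size. A quick recomputation of the two figures seen in this trace (not part of the original run):

# lvs_0  : 228704 free clusters * 4 MiB/cluster = 914816 MiB (then capped to 20480 MiB for lbd_0)
# lvs_n_0:   5114 free clusters * 4 MiB/cluster =  20456 MiB (used as-is for lbd_nest_0 below)
echo $(( 228704 * 4194304 / 1048576 )) $(( 5114 * 4194304 / 1048576 ))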
00:29:48.124 16:28:46 -- common/autotest_common.sh@1353 -- # echo 20456 00:29:48.124 20456 00:29:48.124 16:28:46 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:48.124 16:28:46 -- host/perf.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6173a08-92f9-4fa3-8cca-7830c2e3a3c4 lbd_nest_0 20456 00:29:48.383 16:28:47 -- host/perf.sh@88 -- # lb_nested_guid=a167d893-43dd-4f4e-972f-44e804455e38 00:29:48.383 16:28:47 -- host/perf.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.383 16:28:47 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:48.383 16:28:47 -- host/perf.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a167d893-43dd-4f4e-972f-44e804455e38 00:29:48.641 16:28:47 -- host/perf.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.641 16:28:47 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:48.641 16:28:47 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:48.642 16:28:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:48.642 16:28:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:48.642 16:28:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.642 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.866 Initializing NVMe Controllers 00:30:00.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.866 Initialization complete. Launching workers. 00:30:00.866 ======================================================== 00:30:00.866 Latency(us) 00:30:00.866 Device Information : IOPS MiB/s Average min max 00:30:00.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.68 0.02 20541.73 209.94 46156.73 00:30:00.866 ======================================================== 00:30:00.866 Total : 48.68 0.02 20541.73 209.94 46156.73 00:30:00.866 00:30:00.866 16:28:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:00.866 16:28:58 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.866 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.854 Initializing NVMe Controllers 00:30:10.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.854 Initialization complete. Launching workers. 
00:30:10.854 ======================================================== 00:30:10.854 Latency(us) 00:30:10.854 Device Information : IOPS MiB/s Average min max 00:30:10.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.20 9.90 12633.00 5004.79 48012.84 00:30:10.854 ======================================================== 00:30:10.854 Total : 79.20 9.90 12633.00 5004.79 48012.84 00:30:10.854 00:30:10.854 16:29:08 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:10.854 16:29:08 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:10.854 16:29:08 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.854 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.955 Initializing NVMe Controllers 00:30:20.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.955 Initialization complete. Launching workers. 00:30:20.955 ======================================================== 00:30:20.955 Latency(us) 00:30:20.955 Device Information : IOPS MiB/s Average min max 00:30:20.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8871.40 4.33 3607.08 192.16 10163.03 00:30:20.955 ======================================================== 00:30:20.955 Total : 8871.40 4.33 3607.08 192.16 10163.03 00:30:20.955 00:30:20.955 16:29:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:20.955 16:29:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.955 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.937 Initializing NVMe Controllers 00:30:30.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:30.937 Initialization complete. Launching workers. 00:30:30.937 ======================================================== 00:30:30.937 Latency(us) 00:30:30.937 Device Information : IOPS MiB/s Average min max 00:30:30.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2607.40 325.93 12272.99 933.00 32465.80 00:30:30.937 ======================================================== 00:30:30.937 Total : 2607.40 325.93 12272.99 933.00 32465.80 00:30:30.937 00:30:30.937 16:29:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:30.937 16:29:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:30.937 16:29:29 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.937 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.914 Initializing NVMe Controllers 00:30:40.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.914 Controller IO queue size 128, less than required. 00:30:40.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:40.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.914 Initialization complete. Launching workers. 
00:30:40.914 ======================================================== 00:30:40.914 Latency(us) 00:30:40.914 Device Information : IOPS MiB/s Average min max 00:30:40.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16174.90 7.90 7917.39 1336.74 22352.34 00:30:40.914 ======================================================== 00:30:40.914 Total : 16174.90 7.90 7917.39 1336.74 22352.34 00:30:40.914 00:30:40.914 16:29:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:40.914 16:29:39 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.914 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.130 Initializing NVMe Controllers 00:30:53.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.130 Controller IO queue size 128, less than required. 00:30:53.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:53.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:53.130 Initialization complete. Launching workers. 00:30:53.130 ======================================================== 00:30:53.130 Latency(us) 00:30:53.130 Device Information : IOPS MiB/s Average min max 00:30:53.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.47 151.56 106229.50 23325.25 191198.37 00:30:53.130 ======================================================== 00:30:53.130 Total : 1212.47 151.56 106229.50 23325.25 191198.37 00:30:53.130 00:30:53.130 16:29:50 -- host/perf.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.130 16:29:50 -- host/perf.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a167d893-43dd-4f4e-972f-44e804455e38 00:30:53.130 16:29:50 -- host/perf.sh@106 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:53.130 16:29:51 -- host/perf.sh@107 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20b12767-bb28-4597-8b43-5659a6a828cf 00:30:53.130 16:29:51 -- host/perf.sh@108 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:53.130 16:29:51 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:53.130 16:29:51 -- host/perf.sh@114 -- # nvmftestfini 00:30:53.130 16:29:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:53.130 16:29:51 -- nvmf/common.sh@116 -- # sync 00:30:53.130 16:29:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:53.130 16:29:51 -- nvmf/common.sh@119 -- # set +e 00:30:53.130 16:29:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:53.130 16:29:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:53.130 rmmod nvme_tcp 00:30:53.130 rmmod nvme_fabrics 00:30:53.130 rmmod nvme_keyring 00:30:53.130 16:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:53.130 16:29:51 -- nvmf/common.sh@123 -- # set -e 00:30:53.130 16:29:51 -- nvmf/common.sh@124 -- # return 0 00:30:53.130 16:29:51 -- nvmf/common.sh@477 -- # '[' -n 3256849 ']' 00:30:53.130 16:29:51 -- nvmf/common.sh@478 -- # killprocess 3256849 00:30:53.130 16:29:51 -- common/autotest_common.sh@926 -- # '[' -z 3256849 ']' 00:30:53.130 16:29:51 -- common/autotest_common.sh@930 -- # kill -0 3256849 00:30:53.130 
16:29:51 -- common/autotest_common.sh@931 -- # uname 00:30:53.130 16:29:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:53.130 16:29:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3256849 00:30:53.130 16:29:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:53.130 16:29:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:53.130 16:29:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3256849' 00:30:53.130 killing process with pid 3256849 00:30:53.130 16:29:51 -- common/autotest_common.sh@945 -- # kill 3256849 00:30:53.130 16:29:51 -- common/autotest_common.sh@950 -- # wait 3256849 00:30:54.069 16:29:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:54.069 16:29:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:54.069 16:29:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:54.069 16:29:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.069 16:29:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:54.069 16:29:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.069 16:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:54.069 16:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.978 16:29:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:56.238 00:30:56.239 real 1m28.555s 00:30:56.239 user 5m18.001s 00:30:56.239 sys 0m12.253s 00:30:56.239 16:29:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.239 16:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:56.239 ************************************ 00:30:56.239 END TEST nvmf_perf 00:30:56.239 ************************************ 00:30:56.239 16:29:54 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:56.239 16:29:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:56.239 16:29:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:56.239 16:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:56.239 ************************************ 00:30:56.239 START TEST nvmf_fio_host 00:30:56.239 ************************************ 00:30:56.239 16:29:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:56.239 * Looking for test storage... 
00:30:56.239 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:56.239 16:29:55 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:56.239 16:29:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.239 16:29:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.239 16:29:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.239 16:29:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@5 -- # export PATH 00:30:56.239 16:29:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.239 16:29:55 -- nvmf/common.sh@7 -- # uname -s 00:30:56.239 16:29:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.239 16:29:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.239 16:29:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.239 16:29:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.239 16:29:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.239 16:29:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.239 16:29:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.239 16:29:55 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:30:56.239 16:29:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.239 16:29:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.239 16:29:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:56.239 16:29:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:56.239 16:29:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.239 16:29:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.239 16:29:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:56.239 16:29:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:56.239 16:29:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.239 16:29:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.239 16:29:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.239 16:29:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- paths/export.sh@5 -- # export PATH 00:30:56.239 16:29:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.239 16:29:55 -- nvmf/common.sh@46 -- # : 0 00:30:56.239 16:29:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:56.239 16:29:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:56.239 16:29:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:56.239 16:29:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.239 16:29:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.239 16:29:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:56.239 16:29:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:56.239 16:29:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:56.239 16:29:55 -- host/fio.sh@12 -- # nvmftestinit 00:30:56.239 16:29:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:56.239 16:29:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.239 16:29:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:56.239 16:29:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:56.239 16:29:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:56.239 16:29:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.239 16:29:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.239 16:29:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.239 16:29:55 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:30:56.239 16:29:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:56.239 16:29:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:56.239 16:29:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.516 16:30:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:01.516 16:30:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:01.516 16:30:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:01.516 16:30:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:01.516 16:30:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:01.516 16:30:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:01.516 16:30:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:01.516 16:30:00 -- nvmf/common.sh@294 -- # net_devs=() 00:31:01.516 16:30:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:01.516 16:30:00 -- nvmf/common.sh@295 -- # e810=() 00:31:01.516 16:30:00 -- nvmf/common.sh@295 -- # local -ga e810 00:31:01.516 16:30:00 -- nvmf/common.sh@296 -- # x722=() 00:31:01.516 16:30:00 -- nvmf/common.sh@296 -- # local -ga x722 00:31:01.516 16:30:00 -- nvmf/common.sh@297 -- # mlx=() 00:31:01.516 16:30:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:01.517 16:30:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@307 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.517 16:30:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:01.517 16:30:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.517 16:30:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:01.517 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:01.517 16:30:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.517 16:30:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:01.517 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:01.517 16:30:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.517 16:30:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.517 16:30:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.517 16:30:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:01.517 Found net devices under 0000:27:00.0: cvl_0_0 00:31:01.517 16:30:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.517 16:30:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.517 16:30:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.517 16:30:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.517 16:30:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:01.517 Found net devices under 0000:27:00.1: cvl_0_1 00:31:01.517 16:30:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.517 16:30:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@402 -- # 
is_hw=yes 00:31:01.517 16:30:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:01.517 16:30:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:01.517 16:30:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.517 16:30:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.517 16:30:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.517 16:30:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:01.517 16:30:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.517 16:30:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.517 16:30:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:01.517 16:30:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.517 16:30:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.517 16:30:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:01.517 16:30:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:01.517 16:30:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.517 16:30:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.517 16:30:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.517 16:30:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.778 16:30:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:01.778 16:30:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.778 16:30:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.778 16:30:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.778 16:30:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:01.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:31:01.778 00:31:01.778 --- 10.0.0.2 ping statistics --- 00:31:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.778 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:31:01.778 16:30:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:31:01.778 00:31:01.778 --- 10.0.0.1 ping statistics --- 00:31:01.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.778 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:31:01.778 16:30:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.778 16:30:00 -- nvmf/common.sh@410 -- # return 0 00:31:01.778 16:30:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:01.778 16:30:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.778 16:30:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:01.778 16:30:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:01.778 16:30:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.778 16:30:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:01.778 16:30:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:01.778 16:30:00 -- host/fio.sh@14 -- # [[ y != y ]] 00:31:01.778 16:30:00 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:31:01.778 16:30:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:01.778 16:30:00 -- common/autotest_common.sh@10 -- # set +x 00:31:01.778 16:30:00 -- host/fio.sh@22 -- # nvmfpid=3276368 00:31:01.778 16:30:00 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:01.778 16:30:00 -- host/fio.sh@26 -- # waitforlisten 3276368 00:31:01.778 16:30:00 -- common/autotest_common.sh@819 -- # '[' -z 3276368 ']' 00:31:01.778 16:30:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.778 16:30:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:01.778 16:30:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.778 16:30:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:01.778 16:30:00 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:01.778 16:30:00 -- common/autotest_common.sh@10 -- # set +x 00:31:01.778 [2024-04-23 16:30:00.693438] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:31:01.778 [2024-04-23 16:30:00.693558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.040 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.040 [2024-04-23 16:30:00.834196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:02.040 [2024-04-23 16:30:00.934514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:02.040 [2024-04-23 16:30:00.934739] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.040 [2024-04-23 16:30:00.934768] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.040 [2024-04-23 16:30:00.934779] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
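The fio host test that follows stands up its target entirely through the rpc_cmd helper: a TCP transport, a 64 MiB Malloc bdev, a subsystem carrying that namespace, and data plus discovery listeners on 10.0.0.2:4420. A condensed sketch of that sequence, written here as plain rpc.py invocations (the test uses rpc_cmd, which issues the same RPCs; the script path is abbreviated):

# condensed from the rpc_cmd calls in the trace below
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio is then launched with LD_PRELOAD pointing at the spdk_nvme fio plugin and a --filename string that encodes the transport ID (trtype/adrfam/traddr/trsvcid/ns), as the trace below shows.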
00:31:02.040 [2024-04-23 16:30:00.934863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.040 [2024-04-23 16:30:00.934966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:02.040 [2024-04-23 16:30:00.935077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.040 [2024-04-23 16:30:00.935086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.612 16:30:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:02.612 16:30:01 -- common/autotest_common.sh@852 -- # return 0 00:31:02.612 16:30:01 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 [2024-04-23 16:30:01.419635] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:31:02.612 16:30:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 16:30:01 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 Malloc1 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 [2024-04-23 16:30:01.522404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.612 16:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:02.612 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:31:02.612 16:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:02.612 16:30:01 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:31:02.612 16:30:01 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.612 16:30:01 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.612 16:30:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:02.612 16:30:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.612 16:30:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:02.612 16:30:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.612 16:30:01 -- common/autotest_common.sh@1320 -- # shift 00:31:02.612 16:30:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:02.612 16:30:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.612 16:30:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.612 16:30:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:02.612 16:30:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:02.893 16:30:01 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:02.893 16:30:01 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:02.893 16:30:01 -- common/autotest_common.sh@1326 -- # break 00:31:02.893 16:30:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:02.893 16:30:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:03.154 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:03.154 fio-3.35 00:31:03.154 Starting 1 thread 00:31:03.154 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.692 00:31:05.692 test: (groupid=0, jobs=1): err= 0: pid=3276899: Tue Apr 23 16:30:04 2024 00:31:05.692 read: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(102MiB/2004msec) 00:31:05.692 slat (usec): min=2, max=139, avg= 2.94, stdev= 1.19 00:31:05.692 clat (usec): min=3618, max=9338, avg=5430.31, stdev=458.74 00:31:05.692 lat (usec): min=3621, max=9341, avg=5433.25, stdev=458.75 00:31:05.692 clat percentiles (usec): 00:31:05.692 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:31:05.692 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5538], 00:31:05.692 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5997], 95.00th=[ 6194], 00:31:05.692 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8586], 00:31:05.692 | 99.99th=[ 9110] 00:31:05.692 bw ( KiB/s): min=51040, max=52376, per=99.94%, avg=51880.00, stdev=611.60, samples=4 00:31:05.692 iops : min=12760, max=13094, avg=12970.00, stdev=152.90, samples=4 00:31:05.692 write: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(101MiB/2004msec); 0 zone resets 00:31:05.692 slat (nsec): min=2858, max=123417, avg=3074.69, stdev=846.30 00:31:05.692 clat (usec): min=1419, max=8849, avg=4388.28, stdev=380.68 00:31:05.692 lat (usec): min=1430, max=8852, avg=4391.36, stdev=380.67 00:31:05.692 clat percentiles (usec): 00:31:05.692 | 1.00th=[ 3523], 5.00th=[ 3818], 10.00th=[ 3949], 20.00th=[ 4113], 00:31:05.692 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:31:05.692 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5014], 00:31:05.692 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 6587], 99.95th=[ 7504], 00:31:05.692 | 99.99th=[ 
8291] 00:31:05.692 bw ( KiB/s): min=51440, max=52104, per=99.97%, avg=51840.00, stdev=286.44, samples=4 00:31:05.692 iops : min=12860, max=13026, avg=12960.00, stdev=71.61, samples=4 00:31:05.692 lat (msec) : 2=0.01%, 4=6.62%, 10=93.38% 00:31:05.692 cpu : usr=75.34%, sys=19.62%, ctx=20, majf=0, minf=1526 00:31:05.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:05.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:05.692 issued rwts: total=26008,25979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:05.692 00:31:05.692 Run status group 0 (all jobs): 00:31:05.692 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=102MiB (107MB), run=2004-2004msec 00:31:05.692 WRITE: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=101MiB (106MB), run=2004-2004msec 00:31:05.692 ----------------------------------------------------- 00:31:05.692 Suppressions used: 00:31:05.692 count bytes template 00:31:05.692 1 57 /usr/src/fio/parse.c 00:31:05.692 1 8 libtcmalloc_minimal.so 00:31:05.692 ----------------------------------------------------- 00:31:05.692 00:31:05.692 16:30:04 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.692 16:30:04 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.692 16:30:04 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:05.692 16:30:04 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.692 16:30:04 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:05.692 16:30:04 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:05.692 16:30:04 -- common/autotest_common.sh@1320 -- # shift 00:31:05.692 16:30:04 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:05.692 16:30:04 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.692 16:30:04 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:05.692 16:30:04 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:05.692 16:30:04 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:05.692 16:30:04 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:05.692 16:30:04 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:05.692 16:30:04 -- common/autotest_common.sh@1326 -- # break 00:31:05.692 16:30:04 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:05.692 16:30:04 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:06.275 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:06.275 fio-3.35 00:31:06.275 Starting 1 thread 
00:31:06.275 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.811 00:31:08.811 test: (groupid=0, jobs=1): err= 0: pid=3277871: Tue Apr 23 16:30:07 2024 00:31:08.811 read: IOPS=8306, BW=130MiB/s (136MB/s)(260MiB/2003msec) 00:31:08.811 slat (usec): min=2, max=144, avg= 3.91, stdev= 1.96 00:31:08.811 clat (usec): min=2661, max=21800, avg=9405.80, stdev=2801.19 00:31:08.811 lat (usec): min=2664, max=21805, avg=9409.71, stdev=2801.92 00:31:08.811 clat percentiles (usec): 00:31:08.811 | 1.00th=[ 4015], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 6849], 00:31:08.811 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[ 9896], 00:31:08.811 | 70.00th=[10814], 80.00th=[11863], 90.00th=[13435], 95.00th=[14353], 00:31:08.811 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17695], 99.95th=[18220], 00:31:08.811 | 99.99th=[21103] 00:31:08.811 bw ( KiB/s): min=53088, max=94080, per=51.87%, avg=68936.00, stdev=17930.12, samples=4 00:31:08.811 iops : min= 3318, max= 5880, avg=4308.50, stdev=1120.63, samples=4 00:31:08.811 write: IOPS=4781, BW=74.7MiB/s (78.3MB/s)(141MiB/1888msec); 0 zone resets 00:31:08.811 slat (usec): min=28, max=200, avg=40.43, stdev=11.81 00:31:08.811 clat (usec): min=2896, max=19577, avg=10177.49, stdev=2569.18 00:31:08.811 lat (usec): min=2924, max=19628, avg=10217.92, stdev=2577.67 00:31:08.811 clat percentiles (usec): 00:31:08.811 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7832], 00:31:08.811 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10683], 00:31:08.811 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13829], 95.00th=[14877], 00:31:08.811 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17171], 99.95th=[17695], 00:31:08.811 | 99.99th=[19530] 00:31:08.811 bw ( KiB/s): min=54848, max=98304, per=93.89%, avg=71832.00, stdev=18947.27, samples=4 00:31:08.811 iops : min= 3428, max= 6144, avg=4489.50, stdev=1184.20, samples=4 00:31:08.811 lat (msec) : 4=0.67%, 10=57.30%, 20=42.02%, 50=0.01% 00:31:08.811 cpu : usr=84.62%, sys=14.28%, ctx=11, majf=0, minf=2222 00:31:08.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:08.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.811 issued rwts: total=16638,9028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:08.811 00:31:08.811 Run status group 0 (all jobs): 00:31:08.811 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2003-2003msec 00:31:08.811 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=141MiB (148MB), run=1888-1888msec 00:31:08.811 ----------------------------------------------------- 00:31:08.811 Suppressions used: 00:31:08.811 count bytes template 00:31:08.811 1 57 /usr/src/fio/parse.c 00:31:08.811 1073 103008 /usr/src/fio/iolog.c 00:31:08.811 1 8 libtcmalloc_minimal.so 00:31:08.811 ----------------------------------------------------- 00:31:08.811 00:31:08.811 16:30:07 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.811 16:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.811 16:30:07 -- common/autotest_common.sh@10 -- # set +x 00:31:08.811 16:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.811 16:30:07 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:31:08.811 16:30:07 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:31:08.811 16:30:07 -- 
host/fio.sh@49 -- # get_nvme_bdfs 00:31:08.811 16:30:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:08.811 16:30:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:08.811 16:30:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:08.811 16:30:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:08.811 16:30:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:09.070 16:30:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:31:09.070 16:30:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:31:09.070 16:30:07 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 -i 10.0.0.2 00:31:09.070 16:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.070 16:30:07 -- common/autotest_common.sh@10 -- # set +x 00:31:09.330 Nvme0n1 00:31:09.330 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.330 16:30:08 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:09.330 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.330 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:09.897 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.897 16:30:08 -- host/fio.sh@51 -- # ls_guid=bc6d5734-2225-4959-a5ab-8e1c39c5df02 00:31:09.897 16:30:08 -- host/fio.sh@52 -- # get_lvs_free_mb bc6d5734-2225-4959-a5ab-8e1c39c5df02 00:31:09.897 16:30:08 -- common/autotest_common.sh@1343 -- # local lvs_uuid=bc6d5734-2225-4959-a5ab-8e1c39c5df02 00:31:09.897 16:30:08 -- common/autotest_common.sh@1344 -- # local lvs_info 00:31:09.897 16:30:08 -- common/autotest_common.sh@1345 -- # local fc 00:31:09.897 16:30:08 -- common/autotest_common.sh@1346 -- # local cs 00:31:09.897 16:30:08 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:31:09.897 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.897 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:09.897 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.897 16:30:08 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:31:09.897 { 00:31:09.897 "uuid": "bc6d5734-2225-4959-a5ab-8e1c39c5df02", 00:31:09.897 "name": "lvs_0", 00:31:09.897 "base_bdev": "Nvme0n1", 00:31:09.897 "total_data_clusters": 893, 00:31:09.897 "free_clusters": 893, 00:31:09.898 "block_size": 512, 00:31:09.898 "cluster_size": 1073741824 00:31:09.898 } 00:31:09.898 ]' 00:31:09.898 16:30:08 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="bc6d5734-2225-4959-a5ab-8e1c39c5df02") .free_clusters' 00:31:09.898 16:30:08 -- common/autotest_common.sh@1348 -- # fc=893 00:31:09.898 16:30:08 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="bc6d5734-2225-4959-a5ab-8e1c39c5df02") .cluster_size' 00:31:09.898 16:30:08 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:31:09.898 16:30:08 -- common/autotest_common.sh@1352 -- # free_mb=914432 00:31:09.898 16:30:08 -- common/autotest_common.sh@1353 -- # echo 914432 00:31:09.898 914432 00:31:09.898 16:30:08 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 914432 00:31:09.898 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.898 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:09.898 7c8e355f-fdf0-4f9b-8fad-cbe0454932fe 00:31:09.898 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
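The 914432 handed to bdev_lvol_create comes straight from the lvstore stats queried above: get_lvs_free_mb evidently multiplies the free cluster count by the cluster size and converts to MiB. With 893 free clusters of 1073741824 bytes (1 GiB) each, that is 893 × 1024 = 914432 MiB. A one-line sketch of the same arithmetic, reusing the fc and cs names from the trace:

  fc=893; cs=1073741824
  echo $(( fc * cs / 1024 / 1024 ))   # 914432

The nested lvstore created later follows the same pattern: 228384 free clusters × 4 MiB clusters = 913536 MiB.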
00:31:09.898 16:30:08 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:09.898 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.898 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:09.898 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.898 16:30:08 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:09.898 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.898 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.168 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.168 16:30:08 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:10.168 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.168 16:30:08 -- common/autotest_common.sh@10 -- # set +x 00:31:10.168 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.168 16:30:08 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.168 16:30:08 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.168 16:30:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:10.168 16:30:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.168 16:30:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:10.168 16:30:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.168 16:30:08 -- common/autotest_common.sh@1320 -- # shift 00:31:10.168 16:30:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:10.168 16:30:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.168 16:30:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:10.168 16:30:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:10.168 16:30:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:10.168 16:30:08 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:10.168 16:30:08 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:10.168 16:30:08 -- common/autotest_common.sh@1326 -- # break 00:31:10.168 16:30:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:10.168 16:30:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:10.427 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:10.427 fio-3.35 00:31:10.427 Starting 1 thread 00:31:10.427 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.958 00:31:12.958 test: (groupid=0, jobs=1): err= 0: pid=3279059: Tue Apr 23 16:30:11 2024 00:31:12.958 read: IOPS=9710, BW=37.9MiB/s (39.8MB/s)(76.1MiB/2006msec) 00:31:12.958 slat (nsec): min=1570, 
max=102739, avg=1863.27, stdev=1002.79 00:31:12.958 clat (usec): min=2588, max=11598, avg=7303.71, stdev=636.49 00:31:12.958 lat (usec): min=2603, max=11599, avg=7305.57, stdev=636.44 00:31:12.958 clat percentiles (usec): 00:31:12.958 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:31:12.958 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:31:12.958 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 00:31:12.958 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[10159], 99.95th=[10552], 00:31:12.958 | 99.99th=[11600] 00:31:12.958 bw ( KiB/s): min=37344, max=39816, per=99.92%, avg=38812.00, stdev=1064.22, samples=4 00:31:12.958 iops : min= 9336, max= 9954, avg=9703.00, stdev=266.06, samples=4 00:31:12.958 write: IOPS=9717, BW=38.0MiB/s (39.8MB/s)(76.1MiB/2006msec); 0 zone resets 00:31:12.958 slat (nsec): min=1656, max=548375, avg=2003.76, stdev=3973.49 00:31:12.958 clat (usec): min=2284, max=10609, avg=5811.71, stdev=548.10 00:31:12.958 lat (usec): min=2293, max=10611, avg=5813.71, stdev=548.12 00:31:12.958 clat percentiles (usec): 00:31:12.958 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5407], 00:31:12.958 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:31:12.958 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6456], 95.00th=[ 6652], 00:31:12.958 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 8356], 99.95th=[ 9634], 00:31:12.958 | 99.99th=[10552] 00:31:12.958 bw ( KiB/s): min=38080, max=39424, per=100.00%, avg=38880.00, stdev=568.84, samples=4 00:31:12.958 iops : min= 9520, max= 9856, avg=9720.00, stdev=142.21, samples=4 00:31:12.958 lat (msec) : 4=0.13%, 10=99.77%, 20=0.10% 00:31:12.958 cpu : usr=61.60%, sys=33.87%, ctx=83, majf=0, minf=1521 00:31:12.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:12.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:12.958 issued rwts: total=19480,19493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:12.958 00:31:12.958 Run status group 0 (all jobs): 00:31:12.958 READ: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:31:12.958 WRITE: bw=38.0MiB/s (39.8MB/s), 38.0MiB/s-38.0MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:31:12.958 ----------------------------------------------------- 00:31:12.958 Suppressions used: 00:31:12.958 count bytes template 00:31:12.958 1 58 /usr/src/fio/parse.c 00:31:12.958 1 8 libtcmalloc_minimal.so 00:31:12.958 ----------------------------------------------------- 00:31:12.958 00:31:12.958 16:30:11 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:12.958 16:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.958 16:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:12.958 16:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.958 16:30:11 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:12.958 16:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.958 16:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:12.958 16:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.958 16:30:11 -- host/fio.sh@62 -- # ls_nested_guid=96b231ab-75e6-4179-9eca-678134fd792a 00:31:12.958 16:30:11 -- 
host/fio.sh@63 -- # get_lvs_free_mb 96b231ab-75e6-4179-9eca-678134fd792a 00:31:12.959 16:30:11 -- common/autotest_common.sh@1343 -- # local lvs_uuid=96b231ab-75e6-4179-9eca-678134fd792a 00:31:12.959 16:30:11 -- common/autotest_common.sh@1344 -- # local lvs_info 00:31:12.959 16:30:11 -- common/autotest_common.sh@1345 -- # local fc 00:31:12.959 16:30:11 -- common/autotest_common.sh@1346 -- # local cs 00:31:12.959 16:30:11 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:31:12.959 16:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.959 16:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:12.959 16:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.959 16:30:11 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:31:12.959 { 00:31:12.959 "uuid": "bc6d5734-2225-4959-a5ab-8e1c39c5df02", 00:31:12.959 "name": "lvs_0", 00:31:12.959 "base_bdev": "Nvme0n1", 00:31:12.959 "total_data_clusters": 893, 00:31:12.959 "free_clusters": 0, 00:31:12.959 "block_size": 512, 00:31:12.959 "cluster_size": 1073741824 00:31:12.959 }, 00:31:12.959 { 00:31:12.959 "uuid": "96b231ab-75e6-4179-9eca-678134fd792a", 00:31:12.959 "name": "lvs_n_0", 00:31:12.959 "base_bdev": "7c8e355f-fdf0-4f9b-8fad-cbe0454932fe", 00:31:12.959 "total_data_clusters": 228384, 00:31:12.959 "free_clusters": 228384, 00:31:12.959 "block_size": 512, 00:31:12.959 "cluster_size": 4194304 00:31:12.959 } 00:31:12.959 ]' 00:31:12.959 16:30:11 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="96b231ab-75e6-4179-9eca-678134fd792a") .free_clusters' 00:31:13.217 16:30:11 -- common/autotest_common.sh@1348 -- # fc=228384 00:31:13.217 16:30:11 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="96b231ab-75e6-4179-9eca-678134fd792a") .cluster_size' 00:31:13.217 16:30:11 -- common/autotest_common.sh@1349 -- # cs=4194304 00:31:13.217 16:30:11 -- common/autotest_common.sh@1352 -- # free_mb=913536 00:31:13.217 16:30:11 -- common/autotest_common.sh@1353 -- # echo 913536 00:31:13.217 913536 00:31:13.217 16:30:11 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 913536 00:31:13.217 16:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.217 16:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:13.785 7823e011-5279-4ca4-b767-c7aa39681057 00:31:13.785 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.785 16:30:12 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:13.785 16:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.785 16:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:13.785 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.785 16:30:12 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:13.785 16:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.785 16:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:13.785 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.785 16:30:12 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:13.785 16:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.785 16:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:13.785 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.785 16:30:12 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.785 16:30:12 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.785 16:30:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:13.785 16:30:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.785 16:30:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:13.785 16:30:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.785 16:30:12 -- common/autotest_common.sh@1320 -- # shift 00:31:13.785 16:30:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:13.785 16:30:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.785 16:30:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.785 16:30:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:13.785 16:30:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:13.785 16:30:12 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:13.785 16:30:12 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:13.785 16:30:12 -- common/autotest_common.sh@1326 -- # break 00:31:13.785 16:30:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:13.785 16:30:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.043 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:14.043 fio-3.35 00:31:14.043 Starting 1 thread 00:31:14.302 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.839 00:31:16.839 test: (groupid=0, jobs=1): err= 0: pid=3279891: Tue Apr 23 16:30:15 2024 00:31:16.839 read: IOPS=8405, BW=32.8MiB/s (34.4MB/s)(65.9MiB/2006msec) 00:31:16.839 slat (nsec): min=1581, max=93885, avg=1918.08, stdev=982.85 00:31:16.839 clat (usec): min=3268, max=13634, avg=8465.17, stdev=727.48 00:31:16.839 lat (usec): min=3286, max=13636, avg=8467.09, stdev=727.44 00:31:16.839 clat percentiles (usec): 00:31:16.839 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7898], 00:31:16.839 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:31:16.839 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9634], 00:31:16.839 | 99.00th=[10290], 99.50th=[10683], 99.90th=[12256], 99.95th=[12518], 00:31:16.839 | 99.99th=[13566] 00:31:16.839 bw ( KiB/s): min=32088, max=34344, per=99.88%, avg=33580.00, stdev=1045.55, samples=4 00:31:16.839 iops : min= 8022, max= 8586, avg=8395.00, stdev=261.39, samples=4 00:31:16.839 write: IOPS=8400, BW=32.8MiB/s (34.4MB/s)(65.8MiB/2006msec); 0 zone resets 00:31:16.839 slat (nsec): min=1662, max=80146, avg=2040.83, stdev=696.96 00:31:16.839 clat (usec): min=2240, max=12191, avg=6717.83, stdev=633.07 00:31:16.839 lat (usec): min=2248, max=12193, avg=6719.87, stdev=633.05 00:31:16.839 clat percentiles (usec): 00:31:16.839 | 1.00th=[ 5276], 5.00th=[ 5735], 
10.00th=[ 5932], 20.00th=[ 6259], 00:31:16.839 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:31:16.839 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7701], 00:31:16.839 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9896], 99.95th=[11600], 00:31:16.839 | 99.99th=[12125] 00:31:16.839 bw ( KiB/s): min=32920, max=33856, per=99.97%, avg=33590.00, stdev=447.68, samples=4 00:31:16.839 iops : min= 8230, max= 8464, avg=8397.50, stdev=111.92, samples=4 00:31:16.839 lat (msec) : 4=0.09%, 10=98.82%, 20=1.09% 00:31:16.839 cpu : usr=63.79%, sys=32.12%, ctx=62, majf=0, minf=1522 00:31:16.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:16.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:16.839 issued rwts: total=16861,16851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:16.839 00:31:16.839 Run status group 0 (all jobs): 00:31:16.839 READ: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=65.9MiB (69.1MB), run=2006-2006msec 00:31:16.839 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=65.8MiB (69.0MB), run=2006-2006msec 00:31:16.839 ----------------------------------------------------- 00:31:16.839 Suppressions used: 00:31:16.839 count bytes template 00:31:16.839 1 58 /usr/src/fio/parse.c 00:31:16.839 1 8 libtcmalloc_minimal.so 00:31:16.839 ----------------------------------------------------- 00:31:16.839 00:31:16.839 16:30:15 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:16.839 16:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.839 16:30:15 -- common/autotest_common.sh@10 -- # set +x 00:31:16.839 16:30:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.839 16:30:15 -- host/fio.sh@72 -- # sync 00:31:16.839 16:30:15 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:16.839 16:30:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.839 16:30:15 -- common/autotest_common.sh@10 -- # set +x 00:31:18.216 16:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.216 16:30:16 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:31:18.216 16:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.216 16:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:18.216 16:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.216 16:30:16 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:31:18.216 16:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.216 16:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:18.783 16:30:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.783 16:30:17 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:31:18.783 16:30:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.783 16:30:17 -- common/autotest_common.sh@10 -- # set +x 00:31:18.783 16:30:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.783 16:30:17 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:31:18.783 16:30:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.783 16:30:17 -- common/autotest_common.sh@10 -- # set +x 00:31:19.719 16:30:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.719 16:30:18 -- host/fio.sh@81 -- # trap - SIGINT 
SIGTERM EXIT 00:31:19.719 16:30:18 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:31:19.719 16:30:18 -- host/fio.sh@84 -- # nvmftestfini 00:31:19.719 16:30:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:19.719 16:30:18 -- nvmf/common.sh@116 -- # sync 00:31:19.719 16:30:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:19.719 16:30:18 -- nvmf/common.sh@119 -- # set +e 00:31:19.719 16:30:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:19.719 16:30:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:19.719 rmmod nvme_tcp 00:31:19.719 rmmod nvme_fabrics 00:31:19.719 rmmod nvme_keyring 00:31:19.719 16:30:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:19.719 16:30:18 -- nvmf/common.sh@123 -- # set -e 00:31:19.719 16:30:18 -- nvmf/common.sh@124 -- # return 0 00:31:19.719 16:30:18 -- nvmf/common.sh@477 -- # '[' -n 3276368 ']' 00:31:19.719 16:30:18 -- nvmf/common.sh@478 -- # killprocess 3276368 00:31:19.719 16:30:18 -- common/autotest_common.sh@926 -- # '[' -z 3276368 ']' 00:31:19.719 16:30:18 -- common/autotest_common.sh@930 -- # kill -0 3276368 00:31:19.719 16:30:18 -- common/autotest_common.sh@931 -- # uname 00:31:19.719 16:30:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:19.719 16:30:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3276368 00:31:19.719 16:30:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:19.719 16:30:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:19.719 16:30:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3276368' 00:31:19.719 killing process with pid 3276368 00:31:19.719 16:30:18 -- common/autotest_common.sh@945 -- # kill 3276368 00:31:19.719 16:30:18 -- common/autotest_common.sh@950 -- # wait 3276368 00:31:20.288 16:30:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:20.288 16:30:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:20.288 16:30:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:20.288 16:30:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.288 16:30:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:20.288 16:30:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.288 16:30:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.288 16:30:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.194 16:30:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:22.194 00:31:22.194 real 0m26.056s 00:31:22.194 user 2m25.185s 00:31:22.194 sys 0m8.915s 00:31:22.194 16:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.194 16:30:21 -- common/autotest_common.sh@10 -- # set +x 00:31:22.194 ************************************ 00:31:22.194 END TEST nvmf_fio_host 00:31:22.194 ************************************ 00:31:22.194 16:30:21 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:22.194 16:30:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:22.194 16:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:22.194 16:30:21 -- common/autotest_common.sh@10 -- # set +x 00:31:22.194 ************************************ 00:31:22.194 START TEST nvmf_failover 00:31:22.194 ************************************ 00:31:22.194 16:30:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:22.194 * 
Looking for test storage... 00:31:22.453 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:31:22.453 16:30:21 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.453 16:30:21 -- nvmf/common.sh@7 -- # uname -s 00:31:22.453 16:30:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.453 16:30:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.453 16:30:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.453 16:30:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.453 16:30:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.453 16:30:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.453 16:30:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.453 16:30:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.453 16:30:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.453 16:30:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.453 16:30:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:22.453 16:30:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:22.453 16:30:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.453 16:30:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.453 16:30:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:22.453 16:30:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:22.453 16:30:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.453 16:30:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.453 16:30:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.453 16:30:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.453 16:30:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.453 16:30:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.453 16:30:21 -- paths/export.sh@5 -- # export PATH 00:31:22.453 16:30:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.453 16:30:21 -- nvmf/common.sh@46 -- # : 0 00:31:22.453 16:30:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:22.453 16:30:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:22.453 16:30:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:22.453 16:30:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.453 16:30:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.453 16:30:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:22.453 16:30:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:22.453 16:30:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:22.453 16:30:21 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.453 16:30:21 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.453 16:30:21 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:31:22.453 16:30:21 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:22.453 16:30:21 -- host/failover.sh@18 -- # nvmftestinit 00:31:22.453 16:30:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:22.453 16:30:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.453 16:30:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:22.453 16:30:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:22.453 16:30:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:22.453 16:30:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.453 16:30:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.454 16:30:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.454 16:30:21 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:31:22.454 16:30:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:22.454 16:30:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:22.454 16:30:21 -- common/autotest_common.sh@10 -- # set +x 00:31:27.815 16:30:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:27.815 16:30:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:27.815 16:30:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:27.815 16:30:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:27.815 16:30:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:27.815 16:30:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:27.815 16:30:26 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:31:27.815 16:30:26 -- nvmf/common.sh@294 -- # net_devs=() 00:31:27.815 16:30:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:27.815 16:30:26 -- nvmf/common.sh@295 -- # e810=() 00:31:27.815 16:30:26 -- nvmf/common.sh@295 -- # local -ga e810 00:31:27.815 16:30:26 -- nvmf/common.sh@296 -- # x722=() 00:31:27.815 16:30:26 -- nvmf/common.sh@296 -- # local -ga x722 00:31:27.815 16:30:26 -- nvmf/common.sh@297 -- # mlx=() 00:31:27.815 16:30:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:27.815 16:30:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.815 16:30:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:27.815 16:30:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:27.815 16:30:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:27.815 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:27.815 16:30:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:27.815 16:30:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:27.815 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:27.815 16:30:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:27.815 16:30:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.815 16:30:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.815 16:30:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:27.815 Found net devices under 0000:27:00.0: cvl_0_0 00:31:27.815 16:30:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.815 16:30:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:27.815 16:30:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.815 16:30:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.815 16:30:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:27.815 Found net devices under 0000:27:00.1: cvl_0_1 00:31:27.815 16:30:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.815 16:30:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:27.815 16:30:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:27.815 16:30:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:27.815 16:30:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.815 16:30:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.815 16:30:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.815 16:30:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:27.815 16:30:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.815 16:30:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.815 16:30:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:27.815 16:30:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.815 16:30:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.815 16:30:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:27.815 16:30:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:27.815 16:30:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.815 16:30:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.075 16:30:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.075 16:30:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.075 16:30:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:28.075 16:30:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.075 16:30:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.075 16:30:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.075 16:30:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:28.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:31:28.075 00:31:28.075 --- 10.0.0.2 ping statistics --- 00:31:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.075 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:28.075 16:30:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:31:28.075 00:31:28.075 --- 10.0.0.1 ping statistics --- 00:31:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.075 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:31:28.075 16:30:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.075 16:30:26 -- nvmf/common.sh@410 -- # return 0 00:31:28.075 16:30:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:28.075 16:30:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.075 16:30:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:28.075 16:30:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:28.075 16:30:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.075 16:30:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:28.075 16:30:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:28.075 16:30:26 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:28.075 16:30:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:28.075 16:30:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:28.075 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:31:28.075 16:30:26 -- nvmf/common.sh@469 -- # nvmfpid=3284971 00:31:28.075 16:30:26 -- nvmf/common.sh@470 -- # waitforlisten 3284971 00:31:28.075 16:30:26 -- common/autotest_common.sh@819 -- # '[' -z 3284971 ']' 00:31:28.075 16:30:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.075 16:30:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:28.075 16:30:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.075 16:30:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:28.075 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:31:28.075 16:30:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:28.075 [2024-04-23 16:30:27.001876] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:31:28.075 [2024-04-23 16:30:27.001978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.335 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.335 [2024-04-23 16:30:27.122613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:28.335 [2024-04-23 16:30:27.217362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:28.335 [2024-04-23 16:30:27.217533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.335 [2024-04-23 16:30:27.217547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.335 [2024-04-23 16:30:27.217556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
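For the failover test the target is started with -m 0xE rather than 0xF: 0xE is binary 1110, so reactors run on cores 1-3 and core 0 is left unused by this instance, which matches the "Total cores available: 3" notice and the three reactor lines that follow. A quick way to decode such a mask, shown here purely as an illustration:

  mask=0xE
  for i in 0 1 2 3; do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done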
00:31:28.335 [2024-04-23 16:30:27.217609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.335 [2024-04-23 16:30:27.217721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.335 [2024-04-23 16:30:27.217732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.904 16:30:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:28.904 16:30:27 -- common/autotest_common.sh@852 -- # return 0 00:31:28.904 16:30:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:28.904 16:30:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:28.904 16:30:27 -- common/autotest_common.sh@10 -- # set +x 00:31:28.904 16:30:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.904 16:30:27 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:29.164 [2024-04-23 16:30:27.869012] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.164 16:30:27 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:29.164 Malloc0 00:31:29.164 16:30:28 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.424 16:30:28 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:29.685 16:30:28 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.685 [2024-04-23 16:30:28.525179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.685 16:30:28 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:29.945 [2024-04-23 16:30:28.709350] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:29.945 16:30:28 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:29.945 [2024-04-23 16:30:28.861548] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:30.206 16:30:28 -- host/failover.sh@31 -- # bdevperf_pid=3285395 00:31:30.206 16:30:28 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.206 16:30:28 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:30.206 16:30:28 -- host/failover.sh@34 -- # waitforlisten 3285395 /var/tmp/bdevperf.sock 00:31:30.206 16:30:28 -- common/autotest_common.sh@819 -- # '[' -z 3285395 ']' 00:31:30.206 16:30:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:30.206 16:30:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.206 16:30:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:30.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:30.206 16:30:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.206 16:30:28 -- common/autotest_common.sh@10 -- # set +x 00:31:30.772 16:30:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:30.772 16:30:29 -- common/autotest_common.sh@852 -- # return 0 00:31:30.772 16:30:29 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:31.030 NVMe0n1 00:31:31.030 16:30:29 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:31.598 00:31:31.598 16:30:30 -- host/failover.sh@39 -- # run_test_pid=3285614 00:31:31.598 16:30:30 -- host/failover.sh@41 -- # sleep 1 00:31:31.598 16:30:30 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.535 16:30:31 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.535 [2024-04-23 16:30:31.403351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403522 - 16:30:31.403833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set (identical message repeated at each timestamp in this range) 00:31:32.535 [2024-04-23 16:30:31.403841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with
the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.535 [2024-04-23 16:30:31.403910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 [2024-04-23 16:30:31.403917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 [2024-04-23 16:30:31.403926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 [2024-04-23 16:30:31.403935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 [2024-04-23 16:30:31.403943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 [2024-04-23 16:30:31.403951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:31:32.536 16:30:31 -- host/failover.sh@45 -- # sleep 3 00:31:35.816 16:30:34 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.816 00:31:36.075 16:30:34 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:36.075 [2024-04-23 16:30:34.874867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.075 [2024-04-23 16:30:34.874923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.075 [2024-04-23 16:30:34.874932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.075 [2024-04-23 16:30:34.874939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the 
state(5) to be set 00:31:36.075 [2024-04-23 16:30:34.874947 - 16:30:34.875259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set (identical message repeated at each timestamp in this range) 00:31:36.076 [2024-04-23 16:30:34.875267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the
state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 [2024-04-23 16:30:34.875357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:31:36.076 16:30:34 -- host/failover.sh@50 -- # sleep 3 00:31:39.361 16:30:37 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.361 [2024-04-23 16:30:38.035789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.361 16:30:38 -- host/failover.sh@55 -- # sleep 1 00:31:40.293 16:30:39 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:40.293 [2024-04-23 16:30:39.181345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 
[2024-04-23 16:30:39.181442 - 16:30:39.181597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set (identical message repeated at each timestamp in this range) 00:31:40.293
[2024-04-23 16:30:39.181604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 [2024-04-23 16:30:39.181738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:31:40.293 16:30:39 -- host/failover.sh@59 -- # wait 3285614 00:31:46.863 0 00:31:46.863 16:30:45 -- host/failover.sh@61 -- # killprocess 3285395 00:31:46.863 16:30:45 -- common/autotest_common.sh@926 -- # '[' -z 3285395 ']' 00:31:46.863 16:30:45 -- common/autotest_common.sh@930 -- # kill -0 3285395 00:31:46.863 16:30:45 -- common/autotest_common.sh@931 -- # uname 00:31:46.863 16:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:46.863 16:30:45 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 3285395 00:31:46.863 16:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:46.863 16:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:46.863 16:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3285395' 00:31:46.863 killing process with pid 3285395 00:31:46.863 16:30:45 -- common/autotest_common.sh@945 -- # kill 3285395 00:31:46.863 16:30:45 -- common/autotest_common.sh@950 -- # wait 3285395 00:31:46.863 16:30:45 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:46.863 [2024-04-23 16:30:28.967550] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:31:46.863 [2024-04-23 16:30:28.967716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285395 ] 00:31:46.863 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.863 [2024-04-23 16:30:29.102509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.863 [2024-04-23 16:30:29.194741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.863 Running I/O for 15 seconds... 00:31:46.863 [2024-04-23 16:30:31.404429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.863 [2024-04-23 16:30:31.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.863 [2024-04-23 16:30:31.404705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.404986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.404994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 
16:30:31.405012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.864 [2024-04-23 16:30:31.405251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.864 [2024-04-23 16:30:31.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.864 [2024-04-23 16:30:31.405445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.864 [2024-04-23 16:30:31.405453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 
[2024-04-23 16:30:31.405757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.405946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.405982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.405992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.406000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.406017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.406033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.406059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.406077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.865 [2024-04-23 16:30:31.406115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.406134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.865 [2024-04-23 16:30:31.406144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.865 [2024-04-23 16:30:31.406152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.866 [2024-04-23 16:30:31.406498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.866 [2024-04-23 16:30:31.406710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.866 [2024-04-23 16:30:31.406867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.866 [2024-04-23 16:30:31.406881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6130000042c0 is same with the state(5) to be set 00:31:46.866 [2024-04-23 16:30:31.406900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:46.866 [2024-04-23 16:30:31.406909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:46.867 [2024-04-23 16:30:31.406919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:8 PRP1 0x0 PRP2 0x0 00:31:46.867 [2024-04-23 16:30:31.406934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:31.407067] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6130000042c0 was disconnected and freed. reset controller. 00:31:46.867 [2024-04-23 16:30:31.407095] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:46.867 [2024-04-23 16:30:31.407131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.867 [2024-04-23 16:30:31.407143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:31.407154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.867 [2024-04-23 16:30:31.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:31.407173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.867 [2024-04-23 16:30:31.407180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:31.407190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:46.867 [2024-04-23 16:30:31.407198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:31.407207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:46.867 [2024-04-23 16:30:31.408950] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:46.867 [2024-04-23 16:30:31.408983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:31:46.867 [2024-04-23 16:30:31.478281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
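The records above trace the bdev_nvme failover path in this test: outstanding I/O on the dropped queue pair are completed with ABORTED - SQ DELETION, the disconnected qpair is freed, the path fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes. As a rough orientation only, a minimal sketch of how such a two-path setup is typically driven with SPDK's rpc.py follows; the bdev name Nvme0, the flags, and the exact ordering are assumptions and are not taken from this log (only the addresses and the NQN nqn.2016-06.io.spdk:cnode1 appear above):
# Sketch only: register a primary path, add an alternate path on 4421 under the
# same bdev name, then drop the 4420 listener so the initiator exercises the
# abort / failover / reset-controller sequence recorded in this log.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420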
00:31:46.867 [2024-04-23 16:30:34.875491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875748] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.875990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.875998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.867 [2024-04-23 16:30:34.876126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.867 [2024-04-23 16:30:34.876137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:46.868 [2024-04-23 16:30:34.876487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.868 [2024-04-23 16:30:34.876656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.868 [2024-04-23 16:30:34.876709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.868 [2024-04-23 16:30:34.876719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.876910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.876928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.876964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.876982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.876992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.869 [2024-04-23 16:30:34.877291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 
[2024-04-23 16:30:34.877424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.869 [2024-04-23 16:30:34.877431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.869 [2024-04-23 16:30:34.877444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.870 [2024-04-23 16:30:34.877751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:34.877879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.877888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004640 is same with the state(5) to be set 00:31:46.870 [2024-04-23 16:30:34.877901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:46.870 [2024-04-23 16:30:34.877910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:46.870 [2024-04-23 16:30:34.877921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25992 len:8 PRP1 0x0 PRP2 0x0 00:31:46.870 [2024-04-23 16:30:34.877931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:34.878053] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004640 was disconnected and freed. reset controller. 
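Every completion in the dump above is printed by spdk_nvme_print_completion with the status pair "(00/08)", that is status code type 0 (generic) and status code 0x08, which the NVMe specification names "Command Aborted due to SQ Deletion". That is the expected status while bdev_nvme tears the submission queue down during a failover, not a sign of media or data errors. A purely illustrative decode of that pair, limited to values seen in this log; the helper name is made up and is not part of the test scripts:

  #!/usr/bin/env bash
  # Illustrative only: translate the "(sct/sc)" pair that
  # spdk_nvme_print_completion prints, e.g. "(00/08)", into a readable name.
  decode_nvme_status() {
    case "$1" in
      00/00) echo "GENERIC: Successful Completion" ;;
      00/07) echo "GENERIC: Command Abort Requested" ;;
      00/08) echo "GENERIC: Command Aborted due to SQ Deletion" ;;
      *)     echo "status $1 not decoded in this sketch" ;;
    esac
  }
  decode_nvme_status 00/08   # prints: GENERIC: Command Aborted due to SQ Deletion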
00:31:46.870 [2024-04-23 16:30:34.878073] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:46.870 [2024-04-23 16:30:34.878108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.870 [2024-04-23 16:30:34.878122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.870 [2024-04-23 16:30:34.878135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.870 [2024-04-23 16:30:34.878148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.870 [2024-04-23 16:30:34.878159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.870 [2024-04-23 16:30:34.878171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.870 [2024-04-23 16:30:34.878184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.870 [2024-04-23 16:30:34.878194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.870 [2024-04-23 16:30:34.878203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.870 [2024-04-23 16:30:34.880013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.870 [2024-04-23 16:30:34.880046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor
00:31:46.870 [2024-04-23 16:30:34.905754] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:46.870 [2024-04-23 16:30:39.181860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:39.181913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:39.181938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:39.181947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:39.181958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:39.181966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:39.181977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.870 [2024-04-23 16:30:39.181985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.870 [2024-04-23 16:30:39.181995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 
16:30:39.182108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.871 [2024-04-23 16:30:39.182360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.871 [2024-04-23 16:30:39.182378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.871 [2024-04-23 16:30:39.182666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.871 [2024-04-23 16:30:39.182699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.871 [2024-04-23 16:30:39.182707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129656 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.182897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.182985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 
[2024-04-23 16:30:39.183002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.183120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.183157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.872 [2024-04-23 16:30:39.183193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.872 [2024-04-23 16:30:39.183274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.872 [2024-04-23 16:30:39.183282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.183980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.183990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.873 [2024-04-23 16:30:39.183998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.873 [2024-04-23 16:30:39.184009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.873 [2024-04-23 16:30:39.184017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.874 [2024-04-23 16:30:39.184035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.874 [2024-04-23 16:30:39.184052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:46.874 [2024-04-23 16:30:39.184090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:46.874 [2024-04-23 16:30:39.184235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000004d40 is same with the state(5) to be set 00:31:46.874 [2024-04-23 16:30:39.184260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:46.874 [2024-04-23 16:30:39.184268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:46.874 [2024-04-23 16:30:39.184278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129496 len:8 PRP1 0x0 PRP2 0x0 00:31:46.874 [2024-04-23 16:30:39.184288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:46.874 [2024-04-23 16:30:39.184421] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x613000004d40 was disconnected and freed. reset controller. 
00:31:46.874 [2024-04-23 16:30:39.184438] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:46.874 [2024-04-23 16:30:39.184473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.874 [2024-04-23 16:30:39.184490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.874 [2024-04-23 16:30:39.184506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.874 [2024-04-23 16:30:39.184519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.874 [2024-04-23 16:30:39.184531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.874 [2024-04-23 16:30:39.184540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.874 [2024-04-23 16:30:39.184550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:46.874 [2024-04-23 16:30:39.184557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:46.874 [2024-04-23 16:30:39.184566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:46.874 [2024-04-23 16:30:39.186354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:46.874 [2024-04-23 16:30:39.186390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor
00:31:46.874 [2024-04-23 16:30:39.259594] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
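The failover cycle above completes with another "Resetting controller successful." message: bdev_nvme switches the trid from 10.0.0.2:4422 back to 10.0.0.2:4420, aborts the queued admin requests (the ASYNC EVENT REQUEST completions), drops the stale TCP qpair (hence the "Bad file descriptor" flush error) and reconnects on the new path. The check traced just below at host/failover.sh@65-@67 counts those messages across the whole 15 second run and passes only if it finds exactly three. A minimal sketch of an equivalent check, assuming the run's output was captured to a file; the try.txt path is borrowed from the cat seen later in this log and is used here purely as an illustration:

  count=$(grep -c 'Resetting controller successful' \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
  fi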
00:31:46.874
00:31:46.874 Latency(us)
00:31:46.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:46.874 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:46.874 Verification LBA range: start 0x0 length 0x4000
00:31:46.874 NVMe0n1 : 15.00 17679.61 69.06 854.49 0.00 6893.80 914.05 12417.35
00:31:46.874 ===================================================================================================================
00:31:46.874 Total : 17679.61 69.06 854.49 0.00 6893.80 914.05 12417.35
00:31:46.874 Received shutdown signal, test time was about 15.000000 seconds
00:31:46.874
00:31:46.874 Latency(us)
00:31:46.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:46.874 ===================================================================================================================
00:31:46.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:46.874 16:30:45 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:46.874 16:30:45 -- host/failover.sh@65 -- # count=3
00:31:46.874 16:30:45 -- host/failover.sh@67 -- # (( count != 3 ))
00:31:46.874 16:30:45 -- host/failover.sh@73 -- # bdevperf_pid=3288599
00:31:46.874 16:30:45 -- host/failover.sh@75 -- # waitforlisten 3288599 /var/tmp/bdevperf.sock
00:31:46.874 16:30:45 -- common/autotest_common.sh@819 -- # '[' -z 3288599 ']'
00:31:46.874 16:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:46.874 16:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:46.874 16:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:46.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:46.874 16:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.874 16:30:45 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:46.874 16:30:45 -- common/autotest_common.sh@10 -- # set +x 00:31:47.815 16:30:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.815 16:30:46 -- common/autotest_common.sh@852 -- # return 0 00:31:47.815 16:30:46 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:47.815 [2024-04-23 16:30:46.689308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:47.815 16:30:46 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:48.073 [2024-04-23 16:30:46.841310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:48.073 16:30:46 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.331 NVMe0n1 00:31:48.331 16:30:47 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.589 00:31:48.589 16:30:47 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.860 00:31:48.860 16:30:47 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:48.860 16:30:47 -- host/failover.sh@82 -- # grep -q NVMe0 00:31:49.120 16:30:47 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:49.121 16:30:48 -- host/failover.sh@87 -- # sleep 3 00:31:52.413 16:30:51 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:52.413 16:30:51 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:52.413 16:30:51 -- host/failover.sh@90 -- # run_test_pid=3289782 00:31:52.413 16:30:51 -- host/failover.sh@92 -- # wait 3289782 00:31:52.413 16:30:51 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:53.789 0 00:31:53.789 16:30:52 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:53.789 [2024-04-23 16:30:45.839654] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
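The second half of the test is driven through two RPC endpoints: the target's default socket for the nvmf_* calls and the freshly started bdevperf instance at /var/tmp/bdevperf.sock for the bdev_nvme_* calls. A condensed sketch of the sequence logged above (commands exactly as they appear, collapsed into a loop over the three portals):

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# target side: advertise two more portals for the subsystem
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# host side: attach NVMe0 through all three portals, confirm it exists, then drop the primary path
for port in 4420 4421 4422; do
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
$RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3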
00:31:53.789 [2024-04-23 16:30:45.839783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288599 ] 00:31:53.789 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.789 [2024-04-23 16:30:45.957901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.789 [2024-04-23 16:30:46.049472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.789 [2024-04-23 16:30:48.026470] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:53.789 [2024-04-23 16:30:48.026534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.789 [2024-04-23 16:30:48.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.789 [2024-04-23 16:30:48.026560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.789 [2024-04-23 16:30:48.026568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.789 [2024-04-23 16:30:48.026577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.789 [2024-04-23 16:30:48.026585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.789 [2024-04-23 16:30:48.026593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.789 [2024-04-23 16:30:48.026602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.789 [2024-04-23 16:30:48.026609] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:53.789 [2024-04-23 16:30:48.026659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:53.789 [2024-04-23 16:30:48.026680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003300 (9): Bad file descriptor 00:31:53.789 [2024-04-23 16:30:48.078884] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:53.789 Running I/O for 1 seconds... 
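Note that this bdevperf instance was started with -z and -r /var/tmp/bdevperf.sock, so it sits idle until told to run; the 'Running I/O for 1 seconds...' line above only appears once the script sends perform_tests over that socket. Roughly, using only the commands visible in this trace:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
# started earlier with -z (wait for an RPC start signal) and -t 1 (one-second run)
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
# ... listeners added and controllers attached as sketched above ...
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
wait $run_test_pid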
00:31:53.789 00:31:53.789 Latency(us) 00:31:53.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.789 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:53.789 Verification LBA range: start 0x0 length 0x4000 00:31:53.789 NVMe0n1 : 1.00 17849.87 69.73 0.00 0.00 7143.01 776.08 8312.72 00:31:53.789 =================================================================================================================== 00:31:53.789 Total : 17849.87 69.73 0.00 0.00 7143.01 776.08 8312.72 00:31:53.789 16:30:52 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:53.789 16:30:52 -- host/failover.sh@95 -- # grep -q NVMe0 00:31:53.789 16:30:52 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:53.789 16:30:52 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:53.789 16:30:52 -- host/failover.sh@99 -- # grep -q NVMe0 00:31:54.050 16:30:52 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:54.050 16:30:52 -- host/failover.sh@101 -- # sleep 3 00:31:57.332 16:30:55 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.332 16:30:55 -- host/failover.sh@103 -- # grep -q NVMe0 00:31:57.332 16:30:56 -- host/failover.sh@108 -- # killprocess 3288599 00:31:57.332 16:30:56 -- common/autotest_common.sh@926 -- # '[' -z 3288599 ']' 00:31:57.332 16:30:56 -- common/autotest_common.sh@930 -- # kill -0 3288599 00:31:57.332 16:30:56 -- common/autotest_common.sh@931 -- # uname 00:31:57.332 16:30:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.332 16:30:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3288599 00:31:57.332 16:30:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:57.332 16:30:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:57.332 16:30:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3288599' 00:31:57.332 killing process with pid 3288599 00:31:57.332 16:30:56 -- common/autotest_common.sh@945 -- # kill 3288599 00:31:57.332 16:30:56 -- common/autotest_common.sh@950 -- # wait 3288599 00:31:57.590 16:30:56 -- host/failover.sh@110 -- # sync 00:31:57.590 16:30:56 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:57.849 16:30:56 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:57.849 16:30:56 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:57.849 16:30:56 -- host/failover.sh@116 -- # nvmftestfini 00:31:57.849 16:30:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:57.849 16:30:56 -- nvmf/common.sh@116 -- # sync 00:31:57.849 16:30:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:57.849 16:30:56 -- nvmf/common.sh@119 -- # set +e 00:31:57.849 16:30:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:57.849 16:30:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:57.849 
rmmod nvme_tcp 00:31:57.849 rmmod nvme_fabrics 00:31:57.849 rmmod nvme_keyring 00:31:57.849 16:30:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:57.849 16:30:56 -- nvmf/common.sh@123 -- # set -e 00:31:57.849 16:30:56 -- nvmf/common.sh@124 -- # return 0 00:31:57.849 16:30:56 -- nvmf/common.sh@477 -- # '[' -n 3284971 ']' 00:31:57.849 16:30:56 -- nvmf/common.sh@478 -- # killprocess 3284971 00:31:57.849 16:30:56 -- common/autotest_common.sh@926 -- # '[' -z 3284971 ']' 00:31:57.849 16:30:56 -- common/autotest_common.sh@930 -- # kill -0 3284971 00:31:57.849 16:30:56 -- common/autotest_common.sh@931 -- # uname 00:31:57.849 16:30:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.849 16:30:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284971 00:31:57.849 16:30:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:57.849 16:30:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:57.849 16:30:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284971' 00:31:57.849 killing process with pid 3284971 00:31:57.849 16:30:56 -- common/autotest_common.sh@945 -- # kill 3284971 00:31:57.849 16:30:56 -- common/autotest_common.sh@950 -- # wait 3284971 00:31:58.421 16:30:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:58.421 16:30:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:58.421 16:30:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:58.421 16:30:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.421 16:30:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:58.421 16:30:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.421 16:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.421 16:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.326 16:30:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:00.326 00:32:00.326 real 0m38.202s 00:32:00.326 user 2m0.570s 00:32:00.326 sys 0m7.223s 00:32:00.327 16:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.587 16:30:59 -- common/autotest_common.sh@10 -- # set +x 00:32:00.587 ************************************ 00:32:00.587 END TEST nvmf_failover 00:32:00.587 ************************************ 00:32:00.587 16:30:59 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:00.587 16:30:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:00.587 16:30:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.587 16:30:59 -- common/autotest_common.sh@10 -- # set +x 00:32:00.587 ************************************ 00:32:00.587 START TEST nvmf_discovery 00:32:00.587 ************************************ 00:32:00.587 16:30:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:00.587 * Looking for test storage... 
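Teardown for the failover test is the standard nvmftestfini path seen just above: kill both SPDK processes, delete the subsystem, unload the nvme kernel modules and flush the initiator interface. Condensed, with hypothetical $bdevperf_pid/$nvmfpid standing in for the PIDs from this run (3288599 and 3284971):

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
kill $bdevperf_pid                      # killprocess 3288599
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
modprobe -v -r nvme-tcp                 # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged
modprobe -v -r nvme-fabrics
kill $nvmfpid                           # killprocess 3284971
ip -4 addr flush cvl_0_1                # nvmf_tcp_fini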
00:32:00.587 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:00.587 16:30:59 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.587 16:30:59 -- nvmf/common.sh@7 -- # uname -s 00:32:00.587 16:30:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.587 16:30:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.587 16:30:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.587 16:30:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.587 16:30:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.587 16:30:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.587 16:30:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.587 16:30:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.587 16:30:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.587 16:30:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.587 16:30:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:00.587 16:30:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:00.587 16:30:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.587 16:30:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.587 16:30:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:00.587 16:30:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:00.587 16:30:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.587 16:30:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.587 16:30:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.587 16:30:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.587 16:30:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.587 16:30:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.587 16:30:59 -- paths/export.sh@5 -- # export PATH 00:32:00.587 16:30:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.587 16:30:59 -- nvmf/common.sh@46 -- # : 0 00:32:00.587 16:30:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:00.587 16:30:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:00.587 16:30:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:00.587 16:30:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.587 16:30:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.588 16:30:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:00.588 16:30:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:00.588 16:30:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:00.588 16:30:59 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:00.588 16:30:59 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:00.588 16:30:59 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:00.588 16:30:59 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:00.588 16:30:59 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:00.588 16:30:59 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:00.588 16:30:59 -- host/discovery.sh@25 -- # nvmftestinit 00:32:00.588 16:30:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:00.588 16:30:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.588 16:30:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:00.588 16:30:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:00.588 16:30:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:00.588 16:30:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.588 16:30:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.588 16:30:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.588 16:30:59 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:00.588 16:30:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:00.588 16:30:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:00.588 16:30:59 -- common/autotest_common.sh@10 -- # set +x 00:32:05.861 16:31:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:05.861 16:31:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:05.861 16:31:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:05.861 16:31:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:05.861 16:31:04 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:05.861 16:31:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:05.861 16:31:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:05.861 16:31:04 -- nvmf/common.sh@294 -- # net_devs=() 00:32:05.861 16:31:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:05.861 16:31:04 -- nvmf/common.sh@295 -- # e810=() 00:32:05.861 16:31:04 -- nvmf/common.sh@295 -- # local -ga e810 00:32:05.861 16:31:04 -- nvmf/common.sh@296 -- # x722=() 00:32:05.861 16:31:04 -- nvmf/common.sh@296 -- # local -ga x722 00:32:05.861 16:31:04 -- nvmf/common.sh@297 -- # mlx=() 00:32:05.861 16:31:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:05.861 16:31:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.861 16:31:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:05.861 16:31:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:05.861 16:31:04 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:05.861 16:31:04 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:05.861 16:31:04 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:05.861 16:31:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:05.861 16:31:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.861 16:31:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:05.861 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:05.861 16:31:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.862 16:31:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:05.862 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:05.862 16:31:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:05.862 16:31:04 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.862 16:31:04 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.862 16:31:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.862 16:31:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.862 16:31:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:05.862 Found net devices under 0000:27:00.0: cvl_0_0 00:32:05.862 16:31:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.862 16:31:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.862 16:31:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.862 16:31:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.862 16:31:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.862 16:31:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:05.862 Found net devices under 0000:27:00.1: cvl_0_1 00:32:05.862 16:31:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.862 16:31:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:05.862 16:31:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:05.862 16:31:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:05.862 16:31:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:05.862 16:31:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.862 16:31:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.862 16:31:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.862 16:31:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:05.862 16:31:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.862 16:31:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.862 16:31:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:05.862 16:31:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.862 16:31:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.862 16:31:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:05.862 16:31:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:05.862 16:31:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.862 16:31:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.862 16:31:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.862 16:31:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.862 16:31:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:05.862 16:31:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.119 16:31:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.119 16:31:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.119 16:31:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:06.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:32:06.119 00:32:06.119 --- 10.0.0.2 ping statistics --- 00:32:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.119 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:32:06.119 16:31:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:32:06.119 00:32:06.119 --- 10.0.0.1 ping statistics --- 00:32:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.119 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:32:06.119 16:31:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.119 16:31:04 -- nvmf/common.sh@410 -- # return 0 00:32:06.119 16:31:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:06.119 16:31:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.119 16:31:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:06.119 16:31:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:06.119 16:31:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.119 16:31:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:06.119 16:31:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:06.119 16:31:04 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:06.119 16:31:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:06.119 16:31:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:06.119 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 16:31:04 -- nvmf/common.sh@469 -- # nvmfpid=3294851 00:32:06.119 16:31:04 -- nvmf/common.sh@470 -- # waitforlisten 3294851 00:32:06.119 16:31:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:06.119 16:31:04 -- common/autotest_common.sh@819 -- # '[' -z 3294851 ']' 00:32:06.119 16:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.119 16:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:06.119 16:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.119 16:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:06.119 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:32:06.119 [2024-04-23 16:31:04.975837] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:06.119 [2024-04-23 16:31:04.975937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.377 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.377 [2024-04-23 16:31:05.093057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.377 [2024-04-23 16:31:05.189123] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:06.377 [2024-04-23 16:31:05.189285] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.377 [2024-04-23 16:31:05.189299] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.377 [2024-04-23 16:31:05.189308] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
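This is the nvmf_tcp_init topology used for the discovery test: of the two ice ports found above, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings confirm the link in both directions before the target starts. Condensed from the commands logged above:

# nvmf_tcp_init, condensed (all commands as logged)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# the nvmf target for the discovery test then runs inside that namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2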
00:32:06.377 [2024-04-23 16:31:05.189336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.944 16:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:06.944 16:31:05 -- common/autotest_common.sh@852 -- # return 0 00:32:06.944 16:31:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:06.944 16:31:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 16:31:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.944 16:31:05 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.944 16:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 [2024-04-23 16:31:05.704691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.944 16:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.944 16:31:05 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:06.944 16:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 [2024-04-23 16:31:05.712852] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:06.944 16:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.944 16:31:05 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:06.944 16:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 null0 00:32:06.944 16:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.944 16:31:05 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:06.944 16:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 null1 00:32:06.944 16:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.944 16:31:05 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:06.944 16:31:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 16:31:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.944 16:31:05 -- host/discovery.sh@45 -- # hostpid=3294906 00:32:06.944 16:31:05 -- host/discovery.sh@46 -- # waitforlisten 3294906 /tmp/host.sock 00:32:06.944 16:31:05 -- common/autotest_common.sh@819 -- # '[' -z 3294906 ']' 00:32:06.944 16:31:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:32:06.944 16:31:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:06.944 16:31:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:06.944 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:06.944 16:31:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:06.944 16:31:05 -- common/autotest_common.sh@10 -- # set +x 00:32:06.944 16:31:05 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:06.944 [2024-04-23 16:31:05.817167] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:32:06.944 [2024-04-23 16:31:05.817277] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294906 ] 00:32:07.204 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.204 [2024-04-23 16:31:05.933025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.204 [2024-04-23 16:31:06.022326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:07.204 [2024-04-23 16:31:06.022510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.772 16:31:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:07.772 16:31:06 -- common/autotest_common.sh@852 -- # return 0 00:32:07.772 16:31:06 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:07.772 16:31:06 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@72 -- # notify_id=0 00:32:07.772 16:31:06 -- host/discovery.sh@78 -- # get_subsystem_names 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # sort 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # xargs 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:32:07.772 16:31:06 -- host/discovery.sh@79 -- # get_bdev_list 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # sort 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # xargs 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:32:07.772 16:31:06 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@82 -- # get_subsystem_names 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # sort 00:32:07.772 16:31:06 -- host/discovery.sh@59 -- # xargs 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:07.772 16:31:06 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:32:07.772 16:31:06 -- host/discovery.sh@83 -- # get_bdev_list 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.772 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:07.772 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # xargs 00:32:07.772 16:31:06 -- host/discovery.sh@55 -- # sort 00:32:07.772 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:08.031 16:31:06 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@86 -- # get_subsystem_names 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # sort 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # xargs 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:32:08.031 16:31:06 -- host/discovery.sh@87 -- # get_bdev_list 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # xargs 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # sort 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:08.031 16:31:06 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 [2024-04-23 16:31:06.821052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@92 -- # get_subsystem_names 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # xargs 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 
-- host/discovery.sh@59 -- # sort 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:08.031 16:31:06 -- host/discovery.sh@93 -- # get_bdev_list 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # xargs 00:32:08.031 16:31:06 -- host/discovery.sh@55 -- # sort 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:32:08.031 16:31:06 -- host/discovery.sh@94 -- # get_notification_count 00:32:08.031 16:31:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:08.031 16:31:06 -- host/discovery.sh@74 -- # jq '. | length' 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@74 -- # notification_count=0 00:32:08.031 16:31:06 -- host/discovery.sh@75 -- # notify_id=0 00:32:08.031 16:31:06 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:08.031 16:31:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:08.031 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:32:08.031 16:31:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:08.031 16:31:06 -- host/discovery.sh@100 -- # sleep 1 00:32:08.966 [2024-04-23 16:31:07.590942] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:08.966 [2024-04-23 16:31:07.590974] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:08.966 [2024-04-23 16:31:07.590998] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:08.966 [2024-04-23 16:31:07.679053] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:09.224 [2024-04-23 16:31:07.904809] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:09.224 [2024-04-23 16:31:07.904840] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:09.224 16:31:07 -- host/discovery.sh@101 -- # get_subsystem_names 00:32:09.224 16:31:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.224 16:31:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.224 16:31:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.224 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:32:09.224 16:31:07 -- host/discovery.sh@59 -- # sort 00:32:09.224 16:31:07 -- host/discovery.sh@59 -- # xargs 00:32:09.224 16:31:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.224 16:31:07 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.224 16:31:07 -- host/discovery.sh@102 -- # get_bdev_list 00:32:09.225 16:31:07 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.225 16:31:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.225 16:31:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.225 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:32:09.225 16:31:07 -- host/discovery.sh@55 -- # sort 00:32:09.225 16:31:07 -- host/discovery.sh@55 -- # xargs 00:32:09.225 16:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:32:09.225 16:31:08 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:09.225 16:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.225 16:31:08 -- common/autotest_common.sh@10 -- # set +x 00:32:09.225 16:31:08 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:09.225 16:31:08 -- host/discovery.sh@63 -- # sort -n 00:32:09.225 16:31:08 -- host/discovery.sh@63 -- # xargs 00:32:09.225 16:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@104 -- # get_notification_count 00:32:09.225 16:31:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:09.225 16:31:08 -- host/discovery.sh@74 -- # jq '. | length' 00:32:09.225 16:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.225 16:31:08 -- common/autotest_common.sh@10 -- # set +x 00:32:09.225 16:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@74 -- # notification_count=1 00:32:09.225 16:31:08 -- host/discovery.sh@75 -- # notify_id=1 00:32:09.225 16:31:08 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:09.225 16:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.225 16:31:08 -- common/autotest_common.sh@10 -- # set +x 00:32:09.225 16:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.225 16:31:08 -- host/discovery.sh@109 -- # sleep 1 00:32:10.667 16:31:09 -- host/discovery.sh@110 -- # get_bdev_list 00:32:10.667 16:31:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.667 16:31:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:10.667 16:31:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.667 16:31:09 -- host/discovery.sh@55 -- # sort 00:32:10.667 16:31:09 -- host/discovery.sh@55 -- # xargs 00:32:10.667 16:31:09 -- common/autotest_common.sh@10 -- # set +x 00:32:10.667 16:31:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.667 16:31:09 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:10.667 16:31:09 -- host/discovery.sh@111 -- # get_notification_count 00:32:10.667 16:31:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:10.667 16:31:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:10.667 16:31:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.667 16:31:09 -- common/autotest_common.sh@10 -- # set +x 00:32:10.667 16:31:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.667 16:31:09 -- host/discovery.sh@74 -- # notification_count=1 00:32:10.667 16:31:09 -- host/discovery.sh@75 -- # notify_id=2 00:32:10.667 16:31:09 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:32:10.667 16:31:09 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:10.667 16:31:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.667 16:31:09 -- common/autotest_common.sh@10 -- # set +x 00:32:10.667 [2024-04-23 16:31:09.202435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:10.667 [2024-04-23 16:31:09.203491] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:10.667 [2024-04-23 16:31:09.203531] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:10.667 16:31:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.667 16:31:09 -- host/discovery.sh@117 -- # sleep 1 00:32:10.667 [2024-04-23 16:31:09.291557] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:10.667 [2024-04-23 16:31:09.351081] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:10.667 [2024-04-23 16:31:09.351109] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:10.667 [2024-04-23 16:31:09.351120] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:11.608 16:31:10 -- host/discovery.sh@118 -- # get_subsystem_names 00:32:11.608 16:31:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.608 16:31:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.608 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.608 16:31:10 -- host/discovery.sh@59 -- # sort 00:32:11.608 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.608 16:31:10 -- host/discovery.sh@59 -- # xargs 00:32:11.608 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@119 -- # get_bdev_list 00:32:11.608 16:31:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.608 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.608 16:31:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.608 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.608 16:31:10 -- host/discovery.sh@55 -- # sort 00:32:11.608 16:31:10 -- host/discovery.sh@55 -- # xargs 00:32:11.608 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:32:11.608 16:31:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:11.608 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.608 16:31:10 -- common/autotest_common.sh@10 -- 
# set +x 00:32:11.608 16:31:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:11.608 16:31:10 -- host/discovery.sh@63 -- # sort -n 00:32:11.608 16:31:10 -- host/discovery.sh@63 -- # xargs 00:32:11.608 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@121 -- # get_notification_count 00:32:11.608 16:31:10 -- host/discovery.sh@74 -- # jq '. | length' 00:32:11.608 16:31:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:11.608 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.608 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.608 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@74 -- # notification_count=0 00:32:11.608 16:31:10 -- host/discovery.sh@75 -- # notify_id=2 00:32:11.608 16:31:10 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.608 16:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.608 16:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:11.608 [2024-04-23 16:31:10.376140] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:11.608 [2024-04-23 16:31:10.376185] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:11.608 [2024-04-23 16:31:10.376755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.608 [2024-04-23 16:31:10.376777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.608 [2024-04-23 16:31:10.376789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.608 [2024-04-23 16:31:10.376798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.608 [2024-04-23 16:31:10.376807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.608 [2024-04-23 16:31:10.376816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.608 [2024-04-23 16:31:10.376825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.608 [2024-04-23 16:31:10.376832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.608 [2024-04-23 16:31:10.376841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.608 16:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:11.608 16:31:10 -- host/discovery.sh@127 -- # sleep 1 00:32:11.608 [2024-04-23 16:31:10.386738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.608 [2024-04-23 16:31:10.396754] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.608 [2024-04-23 16:31:10.397373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.397684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.397697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.608 [2024-04-23 16:31:10.397709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.608 [2024-04-23 16:31:10.397724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.608 [2024-04-23 16:31:10.397750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.608 [2024-04-23 16:31:10.397760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.608 [2024-04-23 16:31:10.397772] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.608 [2024-04-23 16:31:10.397790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.608 [2024-04-23 16:31:10.406803] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.608 [2024-04-23 16:31:10.407190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.407700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.407714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.608 [2024-04-23 16:31:10.407724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.608 [2024-04-23 16:31:10.407737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.608 [2024-04-23 16:31:10.407755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.608 [2024-04-23 16:31:10.407763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.608 [2024-04-23 16:31:10.407771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.608 [2024-04-23 16:31:10.407787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
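All of the [[ '' == '' ]], [[ nvme0n1 nvme0n2 == ... ]] and [[ 4420 4421 == ... ]] checks above come from a handful of helpers in host/discovery.sh that wrap RPC calls to the second nvmf_tgt (the one acting as the host on /tmp/host.sock) in jq/sort/xargs. A condensed sketch of those helpers, with the autotest rpc_cmd wrapper written out as a direct rpc.py call for clarity:

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/tmp/host.sock

get_subsystem_names()    { $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
get_bdev_list()          { $RPC -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
get_subsystem_paths()    { $RPC -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
get_notification_count() { $RPC -s $HOST_SOCK notify_get_notifications -i "$notify_id" | jq '. | length'; }

# e.g. after the 4421 listener and the null1 namespace were added above:
#   get_subsystem_paths nvme0  ->  4420 4421
#   get_bdev_list              ->  nvme0n1 nvme0n2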
00:32:11.608 [2024-04-23 16:31:10.416845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.608 [2024-04-23 16:31:10.417082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.417439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.417453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.608 [2024-04-23 16:31:10.417463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.608 [2024-04-23 16:31:10.417477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.608 [2024-04-23 16:31:10.417489] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.608 [2024-04-23 16:31:10.417497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.608 [2024-04-23 16:31:10.417506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.608 [2024-04-23 16:31:10.417524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.608 [2024-04-23 16:31:10.426883] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.608 [2024-04-23 16:31:10.427377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.427657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.427669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.608 [2024-04-23 16:31:10.427678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.608 [2024-04-23 16:31:10.427692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.608 [2024-04-23 16:31:10.427704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.608 [2024-04-23 16:31:10.427713] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.608 [2024-04-23 16:31:10.427721] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.608 [2024-04-23 16:31:10.427732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.608 [2024-04-23 16:31:10.436931] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.608 [2024-04-23 16:31:10.437302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.608 [2024-04-23 16:31:10.437491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.437502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.437511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.437525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.437538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.437546] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.437554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.437566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.609 [2024-04-23 16:31:10.446968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.447113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.447427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.447440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.447449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.447462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.447475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.447486] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.447494] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.447508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.609 [2024-04-23 16:31:10.457003] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.457532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.457745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.457757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.457766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.457779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.457800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.457808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.457817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.457829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.609 [2024-04-23 16:31:10.467045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.467587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.467979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.467991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.468000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.468013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.468031] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.468039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.468046] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.468058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.609 [2024-04-23 16:31:10.477083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.477461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.478017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.478028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.478037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.478050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.478075] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.478083] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.478091] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.478102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.609 [2024-04-23 16:31:10.487117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.487443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.487809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.487821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.487829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.487841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.487853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.487860] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.487867] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.487879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
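The burst of "connect() failed, errno = 111" entries above is the host's bdev_nvme layer retrying the old 10.0.0.2:4420 path for nqn.2016-06.io.spdk:cnode0 after the target dropped that listener; errno 111 is ECONNREFUSED, so every reset/reconnect attempt fails until the discovery poller reports the 4420 path "not found" and keeps only 4421 (next entries). A minimal sketch for inspecting the host-side state while this loop runs, reusing the /tmp/host.sock RPC socket and the same RPCs this test already calls (the rpc.py path and the controller name "nvme0" are taken from this run, not universal):

# errno 111 seen above is ECONNREFUSED: nothing listens on 10.0.0.2:4420 anymore.
HOST_SOCK=/tmp/host.sock
# Which transport IDs does controller nvme0 still have?
scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid | "\(.traddr):\(.trsvcid)"'
# Which namespaces (bdevs) are still exposed while the 4420 path is failing?
scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name'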
00:32:11.609 [2024-04-23 16:31:10.497155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:11.609 [2024-04-23 16:31:10.497584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.498129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.609 [2024-04-23 16:31:10.498147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:11.609 [2024-04-23 16:31:10.498155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:11.609 [2024-04-23 16:31:10.498169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:11.609 [2024-04-23 16:31:10.498191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:11.609 [2024-04-23 16:31:10.498199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:11.609 [2024-04-23 16:31:10.498208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:11.609 [2024-04-23 16:31:10.498220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.609 [2024-04-23 16:31:10.505216] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:11.609 [2024-04-23 16:31:10.505247] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:12.545 16:31:11 -- host/discovery.sh@128 -- # get_subsystem_names 00:32:12.545 16:31:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.545 16:31:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.545 16:31:11 -- host/discovery.sh@59 -- # xargs 00:32:12.545 16:31:11 -- host/discovery.sh@59 -- # sort 00:32:12.545 16:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.545 16:31:11 -- common/autotest_common.sh@10 -- # set +x 00:32:12.545 16:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.545 16:31:11 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.545 16:31:11 -- host/discovery.sh@129 -- # get_bdev_list 00:32:12.545 16:31:11 -- host/discovery.sh@55 -- # xargs 00:32:12.545 16:31:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.545 16:31:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:12.545 16:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.545 16:31:11 -- host/discovery.sh@55 -- # sort 00:32:12.545 16:31:11 -- common/autotest_common.sh@10 -- # set +x 00:32:12.545 16:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.545 16:31:11 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:12.545 16:31:11 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:32:12.545 16:31:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:12.545 16:31:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:12.545 16:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.545 16:31:11 -- common/autotest_common.sh@10 -- # set +x 
00:32:12.545 16:31:11 -- host/discovery.sh@63 -- # sort -n 00:32:12.545 16:31:11 -- host/discovery.sh@63 -- # xargs 00:32:12.545 16:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.803 16:31:11 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:32:12.803 16:31:11 -- host/discovery.sh@131 -- # get_notification_count 00:32:12.803 16:31:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:12.803 16:31:11 -- host/discovery.sh@74 -- # jq '. | length' 00:32:12.803 16:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.803 16:31:11 -- common/autotest_common.sh@10 -- # set +x 00:32:12.803 16:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.803 16:31:11 -- host/discovery.sh@74 -- # notification_count=0 00:32:12.803 16:31:11 -- host/discovery.sh@75 -- # notify_id=2 00:32:12.803 16:31:11 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:32:12.803 16:31:11 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:12.803 16:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.803 16:31:11 -- common/autotest_common.sh@10 -- # set +x 00:32:12.803 16:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:12.803 16:31:11 -- host/discovery.sh@135 -- # sleep 1 00:32:13.739 16:31:12 -- host/discovery.sh@136 -- # get_subsystem_names 00:32:13.739 16:31:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.739 16:31:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.739 16:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.739 16:31:12 -- host/discovery.sh@59 -- # xargs 00:32:13.739 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.739 16:31:12 -- host/discovery.sh@59 -- # sort 00:32:13.739 16:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.739 16:31:12 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:32:13.739 16:31:12 -- host/discovery.sh@137 -- # get_bdev_list 00:32:13.739 16:31:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.739 16:31:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.739 16:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.739 16:31:12 -- host/discovery.sh@55 -- # sort 00:32:13.739 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.739 16:31:12 -- host/discovery.sh@55 -- # xargs 00:32:13.739 16:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.739 16:31:12 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:32:13.739 16:31:12 -- host/discovery.sh@138 -- # get_notification_count 00:32:13.739 16:31:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:13.739 16:31:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:13.739 16:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.739 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:32:13.739 16:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.739 16:31:12 -- host/discovery.sh@74 -- # notification_count=2 00:32:13.739 16:31:12 -- host/discovery.sh@75 -- # notify_id=4 00:32:13.739 16:31:12 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:32:13.739 16:31:12 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:13.739 16:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.739 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:32:15.114 [2024-04-23 16:31:13.720357] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:15.114 [2024-04-23 16:31:13.720383] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:15.114 [2024-04-23 16:31:13.720401] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:15.114 [2024-04-23 16:31:13.808462] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:15.371 [2024-04-23 16:31:14.120108] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:15.372 [2024-04-23 16:31:14.120149] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.372 16:31:14 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@640 -- # local es=0 00:32:15.372 16:31:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.372 16:31:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 request: 00:32:15.372 { 00:32:15.372 "name": "nvme", 00:32:15.372 "trtype": "tcp", 00:32:15.372 "traddr": "10.0.0.2", 00:32:15.372 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:15.372 "adrfam": "ipv4", 00:32:15.372 "trsvcid": "8009", 00:32:15.372 "wait_for_attach": true, 00:32:15.372 "method": "bdev_nvme_start_discovery", 00:32:15.372 "req_id": 1 00:32:15.372 } 00:32:15.372 Got JSON-RPC error response 00:32:15.372 response: 00:32:15.372 { 00:32:15.372 "code": -17, 00:32:15.372 "message": "File exists" 00:32:15.372 } 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:15.372 16:31:14 -- common/autotest_common.sh@643 -- # es=1 00:32:15.372 16:31:14 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:15.372 16:31:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:15.372 16:31:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:15.372 16:31:14 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # sort 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # xargs 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.372 16:31:14 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:32:15.372 16:31:14 -- host/discovery.sh@147 -- # get_bdev_list 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # sort 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # xargs 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.372 16:31:14 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.372 16:31:14 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@640 -- # local es=0 00:32:15.372 16:31:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:15.372 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.372 16:31:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 request: 00:32:15.372 { 00:32:15.372 "name": "nvme_second", 00:32:15.372 "trtype": "tcp", 00:32:15.372 "traddr": "10.0.0.2", 00:32:15.372 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:15.372 "adrfam": "ipv4", 00:32:15.372 "trsvcid": "8009", 00:32:15.372 "wait_for_attach": true, 00:32:15.372 "method": "bdev_nvme_start_discovery", 00:32:15.372 "req_id": 1 00:32:15.372 } 00:32:15.372 Got JSON-RPC error response 00:32:15.372 response: 00:32:15.372 { 00:32:15.372 "code": -17, 00:32:15.372 "message": "File exists" 00:32:15.372 } 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:15.372 16:31:14 -- common/autotest_common.sh@643 -- # es=1 00:32:15.372 16:31:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:15.372 16:31:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:15.372 16:31:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:15.372 
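Both bdev_nvme_start_discovery attempts above ("nvme" and "nvme_second", each aimed at 10.0.0.2:8009) are rejected with JSON-RPC error -17 "File exists": a discovery service for that discovery trid is already attached, and the test asserts that the rejection leaves the existing discovery controller and bdev list (nvme0n1 nvme0n2) untouched. A sketch of the same negative check with plain rpc.py calls, using only arguments that appear in the trace (the socket path and host NQN are this run's values):

# Expect -17 "File exists": discovery for 10.0.0.2:8009 is already running on this host app.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w \
    || echo "second discovery rejected, as expected"
# The already-attached discovery service and its bdevs must be unchanged:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs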
16:31:14 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # sort 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 16:31:14 -- host/discovery.sh@67 -- # xargs 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.372 16:31:14 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:32:15.372 16:31:14 -- host/discovery.sh@153 -- # get_bdev_list 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # sort 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:15.372 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.372 16:31:14 -- host/discovery.sh@55 -- # xargs 00:32:15.372 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:15.372 16:31:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.631 16:31:14 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:15.631 16:31:14 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.631 16:31:14 -- common/autotest_common.sh@640 -- # local es=0 00:32:15.631 16:31:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.631 16:31:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:32:15.631 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.631 16:31:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:32:15.631 16:31:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:32:15.631 16:31:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:15.631 16:31:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.631 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:32:16.567 [2024-04-23 16:31:15.312933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.567 [2024-04-23 16:31:15.313467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.567 [2024-04-23 16:31:15.313485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000006240 with addr=10.0.0.2, port=8010 00:32:16.567 [2024-04-23 16:31:15.313518] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:16.567 [2024-04-23 16:31:15.313529] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:16.567 [2024-04-23 16:31:15.313542] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:17.501 [2024-04-23 16:31:16.312941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.501 [2024-04-23 16:31:16.313461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.501 [2024-04-23 16:31:16.313474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x613000006400 with addr=10.0.0.2, port=8010 00:32:17.501 [2024-04-23 16:31:16.313501] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:17.501 [2024-04-23 16:31:16.313509] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:17.501 [2024-04-23 16:31:16.313521] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:18.438 [2024-04-23 16:31:17.312354] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:18.438 request: 00:32:18.438 { 00:32:18.438 "name": "nvme_second", 00:32:18.438 "trtype": "tcp", 00:32:18.438 "traddr": "10.0.0.2", 00:32:18.438 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.438 "adrfam": "ipv4", 00:32:18.438 "trsvcid": "8010", 00:32:18.438 "attach_timeout_ms": 3000, 00:32:18.438 "method": "bdev_nvme_start_discovery", 00:32:18.438 "req_id": 1 00:32:18.438 } 00:32:18.438 Got JSON-RPC error response 00:32:18.438 response: 00:32:18.438 { 00:32:18.438 "code": -110, 00:32:18.438 "message": "Connection timed out" 00:32:18.438 } 00:32:18.438 16:31:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:32:18.438 16:31:17 -- common/autotest_common.sh@643 -- # es=1 00:32:18.438 16:31:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:32:18.438 16:31:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:32:18.438 16:31:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:32:18.438 16:31:17 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:32:18.438 16:31:17 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.438 16:31:17 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.438 16:31:17 -- host/discovery.sh@67 -- # xargs 00:32:18.438 16:31:17 -- host/discovery.sh@67 -- # sort 00:32:18.438 16:31:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:18.438 16:31:17 -- common/autotest_common.sh@10 -- # set +x 00:32:18.438 16:31:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:18.438 16:31:17 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:32:18.438 16:31:17 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:32:18.438 16:31:17 -- host/discovery.sh@162 -- # kill 3294906 00:32:18.438 16:31:17 -- host/discovery.sh@163 -- # nvmftestfini 00:32:18.438 16:31:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:18.438 16:31:17 -- nvmf/common.sh@116 -- # sync 00:32:18.438 16:31:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:18.438 16:31:17 -- nvmf/common.sh@119 -- # set +e 00:32:18.438 16:31:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:18.438 16:31:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:18.438 rmmod nvme_tcp 00:32:18.698 rmmod nvme_fabrics 00:32:18.698 rmmod nvme_keyring 00:32:18.698 16:31:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:18.698 16:31:17 -- nvmf/common.sh@123 -- # set -e 00:32:18.698 16:31:17 -- nvmf/common.sh@124 -- # return 0 00:32:18.698 16:31:17 -- nvmf/common.sh@477 -- # '[' -n 3294851 ']' 00:32:18.698 16:31:17 -- nvmf/common.sh@478 -- # killprocess 3294851 00:32:18.698 16:31:17 -- common/autotest_common.sh@926 -- # '[' -z 3294851 ']' 00:32:18.698 16:31:17 -- common/autotest_common.sh@930 -- # kill -0 3294851 00:32:18.698 16:31:17 -- common/autotest_common.sh@931 -- # uname 00:32:18.698 16:31:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:18.698 16:31:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3294851 
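The last negative case points nvme_second at port 8010, where nothing is listening, with an attach timeout of 3000 ms (the -T 3000 argument in the trace). Each connect attempt fails with errno 111 until the timeout expires, and the RPC then returns -110 "Connection timed out" instead of -17. With that verified, the test clears its traps, kills the host app (pid 3294906), and runs nvmftestfini, which unloads nvme_tcp/nvme_fabrics/nvme_keyring and kills the target (pid 3294851). A sketch of the timeout case under the same assumptions as the previous snippets:

# No listener on 10.0.0.2:8010: expect -110 "Connection timed out" after roughly 3000 ms.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000 \
    || echo "discovery attach timed out, as expected"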
00:32:18.698 16:31:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:18.698 16:31:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:18.698 16:31:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3294851' 00:32:18.698 killing process with pid 3294851 00:32:18.698 16:31:17 -- common/autotest_common.sh@945 -- # kill 3294851 00:32:18.698 16:31:17 -- common/autotest_common.sh@950 -- # wait 3294851 00:32:19.265 16:31:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:19.265 16:31:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:19.265 16:31:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:19.265 16:31:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.265 16:31:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:19.265 16:31:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.265 16:31:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.265 16:31:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.169 16:31:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:21.169 00:32:21.169 real 0m20.665s 00:32:21.169 user 0m27.344s 00:32:21.169 sys 0m5.390s 00:32:21.169 16:31:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.169 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.169 ************************************ 00:32:21.169 END TEST nvmf_discovery 00:32:21.169 ************************************ 00:32:21.169 16:31:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:21.169 16:31:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:21.169 16:31:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.169 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:32:21.169 ************************************ 00:32:21.169 START TEST nvmf_discovery_remove_ifc 00:32:21.169 ************************************ 00:32:21.169 16:31:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:21.170 * Looking for test storage... 
00:32:21.170 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.170 16:31:20 -- nvmf/common.sh@7 -- # uname -s 00:32:21.170 16:31:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.170 16:31:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.170 16:31:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.170 16:31:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.170 16:31:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.170 16:31:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.170 16:31:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.170 16:31:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.170 16:31:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.170 16:31:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.170 16:31:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:21.170 16:31:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:21.170 16:31:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.170 16:31:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.170 16:31:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:21.170 16:31:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:21.170 16:31:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.170 16:31:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.170 16:31:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.170 16:31:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.170 16:31:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.170 16:31:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.170 16:31:20 -- paths/export.sh@5 -- # export PATH 00:32:21.170 16:31:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.170 16:31:20 -- nvmf/common.sh@46 -- # : 0 00:32:21.170 16:31:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:21.170 16:31:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:21.170 16:31:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:21.170 16:31:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.170 16:31:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.170 16:31:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:21.170 16:31:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:21.170 16:31:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:21.170 16:31:20 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:21.170 16:31:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:21.170 16:31:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.170 16:31:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:21.170 16:31:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:21.170 16:31:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:21.170 16:31:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.170 16:31:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.170 16:31:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.170 16:31:20 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:21.170 16:31:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:21.170 16:31:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:21.170 16:31:20 -- common/autotest_common.sh@10 -- # set +x 00:32:26.442 16:31:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:26.442 16:31:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:26.442 16:31:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:26.442 
16:31:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:26.442 16:31:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:26.442 16:31:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:26.442 16:31:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:26.442 16:31:25 -- nvmf/common.sh@294 -- # net_devs=() 00:32:26.442 16:31:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:26.442 16:31:25 -- nvmf/common.sh@295 -- # e810=() 00:32:26.442 16:31:25 -- nvmf/common.sh@295 -- # local -ga e810 00:32:26.442 16:31:25 -- nvmf/common.sh@296 -- # x722=() 00:32:26.442 16:31:25 -- nvmf/common.sh@296 -- # local -ga x722 00:32:26.442 16:31:25 -- nvmf/common.sh@297 -- # mlx=() 00:32:26.442 16:31:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:26.442 16:31:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.442 16:31:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:26.442 16:31:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:26.442 16:31:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:26.442 16:31:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:26.442 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:26.442 16:31:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:26.442 16:31:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:26.442 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:26.442 16:31:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:26.442 16:31:25 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:26.442 
16:31:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.442 16:31:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:26.442 16:31:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.442 16:31:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:26.442 Found net devices under 0000:27:00.0: cvl_0_0 00:32:26.442 16:31:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.442 16:31:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:26.442 16:31:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.442 16:31:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:26.442 16:31:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.442 16:31:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:26.442 Found net devices under 0000:27:00.1: cvl_0_1 00:32:26.442 16:31:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.442 16:31:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:26.442 16:31:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:26.442 16:31:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:26.442 16:31:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:26.442 16:31:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.443 16:31:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.443 16:31:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.443 16:31:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:26.443 16:31:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.443 16:31:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.443 16:31:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:26.443 16:31:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.443 16:31:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.443 16:31:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:26.443 16:31:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:26.443 16:31:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.443 16:31:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.443 16:31:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.443 16:31:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.443 16:31:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:26.443 16:31:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.703 16:31:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.703 16:31:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.703 16:31:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:26.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:32:26.703 00:32:26.703 --- 10.0.0.2 ping statistics --- 00:32:26.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.703 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:32:26.703 16:31:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:32:26.703 00:32:26.703 --- 10.0.0.1 ping statistics --- 00:32:26.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.703 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:32:26.703 16:31:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.703 16:31:25 -- nvmf/common.sh@410 -- # return 0 00:32:26.703 16:31:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:26.703 16:31:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.703 16:31:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:26.703 16:31:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:26.703 16:31:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.703 16:31:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:26.703 16:31:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:26.703 16:31:25 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:26.703 16:31:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:26.703 16:31:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:26.703 16:31:25 -- common/autotest_common.sh@10 -- # set +x 00:32:26.703 16:31:25 -- nvmf/common.sh@469 -- # nvmfpid=3301346 00:32:26.703 16:31:25 -- nvmf/common.sh@470 -- # waitforlisten 3301346 00:32:26.703 16:31:25 -- common/autotest_common.sh@819 -- # '[' -z 3301346 ']' 00:32:26.703 16:31:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.703 16:31:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:26.703 16:31:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.703 16:31:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:26.703 16:31:25 -- common/autotest_common.sh@10 -- # set +x 00:32:26.703 16:31:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:26.703 [2024-04-23 16:31:25.617275] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:26.703 [2024-04-23 16:31:25.617408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.964 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.964 [2024-04-23 16:31:25.764392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.964 [2024-04-23 16:31:25.867234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:26.964 [2024-04-23 16:31:25.867468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.964 [2024-04-23 16:31:25.867486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.964 [2024-04-23 16:31:25.867496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
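For the remove-ifc test, nvmf_tcp_init rebuilds the usual split topology from the cvl_0_0/cvl_0_1 port pair: the target port is moved into its own network namespace with 10.0.0.2/24, the initiator port keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP 4420, both directions are ping-verified, and the target app is then launched inside the namespace. The commands below are lifted from the trace; the interface names, namespace, and binary path are specific to this machine:

# Topology built by nvmf_tcp_init in this run (target side lives in a netns).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
# Target started inside the namespace, as traced above (path shortened):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2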
00:32:26.964 [2024-04-23 16:31:25.867525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.532 16:31:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:27.532 16:31:26 -- common/autotest_common.sh@852 -- # return 0 00:32:27.532 16:31:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:27.532 16:31:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:27.532 16:31:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.532 16:31:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.532 16:31:26 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:27.532 16:31:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:27.532 16:31:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.532 [2024-04-23 16:31:26.354644] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.532 [2024-04-23 16:31:26.362828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:27.532 null0 00:32:27.532 [2024-04-23 16:31:26.394742] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.532 16:31:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:27.532 16:31:26 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3301519 00:32:27.532 16:31:26 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3301519 /tmp/host.sock 00:32:27.532 16:31:26 -- common/autotest_common.sh@819 -- # '[' -z 3301519 ']' 00:32:27.532 16:31:26 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:27.532 16:31:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:32:27.532 16:31:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:27.532 16:31:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:27.532 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:27.532 16:31:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:27.532 16:31:26 -- common/autotest_common.sh@10 -- # set +x 00:32:27.789 [2024-04-23 16:31:26.490477] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:32:27.789 [2024-04-23 16:31:26.490655] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3301519 ] 00:32:27.789 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.789 [2024-04-23 16:31:26.604506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.789 [2024-04-23 16:31:26.694773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:27.789 [2024-04-23 16:31:26.694953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.355 16:31:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:28.355 16:31:27 -- common/autotest_common.sh@852 -- # return 0 00:32:28.355 16:31:27 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.355 16:31:27 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:28.355 16:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.355 16:31:27 -- common/autotest_common.sh@10 -- # set +x 00:32:28.355 16:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:28.355 16:31:27 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:28.355 16:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.355 16:31:27 -- common/autotest_common.sh@10 -- # set +x 00:32:28.613 16:31:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:28.613 16:31:27 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:28.613 16:31:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:28.613 16:31:27 -- common/autotest_common.sh@10 -- # set +x 00:32:29.550 [2024-04-23 16:31:28.379520] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:29.550 [2024-04-23 16:31:28.379554] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:29.550 [2024-04-23 16:31:28.379575] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:29.808 [2024-04-23 16:31:28.509665] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:29.808 [2024-04-23 16:31:28.690619] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:29.808 [2024-04-23 16:31:28.690681] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:29.808 [2024-04-23 16:31:28.690719] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:29.808 [2024-04-23 16:31:28.690741] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:29.808 [2024-04-23 16:31:28.690768] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:29.808 16:31:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.808 [2024-04-23 16:31:28.697051] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000003f40 was disconnected and freed. delete nvme_qpair. 00:32:29.808 16:31:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:29.808 16:31:28 -- common/autotest_common.sh@10 -- # set +x 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.808 16:31:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:29.808 16:31:28 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:30.066 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:30.067 16:31:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:30.067 16:31:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:30.067 16:31:28 -- common/autotest_common.sh@10 -- # set +x 00:32:30.067 16:31:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:30.067 16:31:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:30.067 16:31:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:31.002 16:31:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:31.002 16:31:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:31.002 16:31:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:31.002 16:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.002 16:31:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:31.002 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:32:31.002 16:31:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:31.262 16:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.262 16:31:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:31.262 16:31:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:32.199 16:31:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:32.199 16:31:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:32.199 16:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.199 16:31:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:32.199 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:32:32.199 16:31:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:32.199 16:31:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:32.199 16:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.199 16:31:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:32.199 16:31:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd 
-s /tmp/host.sock bdev_get_bdevs 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:33.134 16:31:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:33.134 16:31:32 -- common/autotest_common.sh@10 -- # set +x 00:32:33.134 16:31:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:33.134 16:31:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:34.512 16:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:34.512 16:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:34.512 16:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:34.512 16:31:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:35.447 16:31:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.447 16:31:34 -- common/autotest_common.sh@10 -- # set +x 00:32:35.447 [2024-04-23 16:31:34.118872] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:35.447 [2024-04-23 16:31:34.118941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.447 [2024-04-23 16:31:34.118957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.447 [2024-04-23 16:31:34.118971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.447 [2024-04-23 16:31:34.118980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.447 [2024-04-23 16:31:34.118989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.447 [2024-04-23 16:31:34.118997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.447 [2024-04-23 16:31:34.119006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.447 [2024-04-23 16:31:34.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.447 [2024-04-23 16:31:34.119023] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:35.447 [2024-04-23 16:31:34.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.447 [2024-04-23 16:31:34.119041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:35.447 16:31:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.447 [2024-04-23 16:31:34.128865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:35.447 [2024-04-23 16:31:34.138885] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:35.447 16:31:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:36.386 16:31:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:36.386 16:31:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.386 16:31:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:36.386 16:31:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:36.386 16:31:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:36.386 16:31:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:36.386 16:31:35 -- common/autotest_common.sh@10 -- # set +x 00:32:36.386 [2024-04-23 16:31:35.177689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:37.320 [2024-04-23 16:31:36.200675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:37.320 [2024-04-23 16:31:36.200748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000003680 with addr=10.0.0.2, port=4420 00:32:37.320 [2024-04-23 16:31:36.200773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613000003680 is same with the state(5) to be set 00:32:37.320 [2024-04-23 16:31:36.201388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613000003680 (9): Bad file descriptor 00:32:37.320 [2024-04-23 16:31:36.201427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.320 [2024-04-23 16:31:36.201474] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:37.320 [2024-04-23 16:31:36.201513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.320 [2024-04-23 16:31:36.201539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.320 [2024-04-23 16:31:36.201560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.320 [2024-04-23 16:31:36.201575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.320 [2024-04-23 16:31:36.201591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.320 [2024-04-23 16:31:36.201604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.320 [2024-04-23 16:31:36.201620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.320 [2024-04-23 16:31:36.201658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.320 [2024-04-23 16:31:36.201674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.320 [2024-04-23 16:31:36.201689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.320 [2024-04-23 16:31:36.201704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:37.320 [2024-04-23 16:31:36.201814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6130000034c0 (9): Bad file descriptor 00:32:37.320 [2024-04-23 16:31:36.202875] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:37.320 [2024-04-23 16:31:36.202893] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:37.320 16:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:37.320 16:31:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:37.320 16:31:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.701 16:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.701 16:31:37 -- common/autotest_common.sh@10 -- # set +x 00:32:38.701 16:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:38.701 16:31:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.701 16:31:37 -- common/autotest_common.sh@10 -- # set +x 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:38.701 16:31:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:38.701 16:31:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:39.635 [2024-04-23 16:31:38.251428] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:39.635 [2024-04-23 16:31:38.251456] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:39.635 [2024-04-23 16:31:38.251475] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.635 [2024-04-23 16:31:38.381580] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:39.635 16:31:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:39.635 16:31:38 -- common/autotest_common.sh@10 -- # set +x 
00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:39.635 16:31:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:39.635 16:31:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:39.895 [2024-04-23 16:31:38.604750] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:39.895 [2024-04-23 16:31:38.604800] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:39.895 [2024-04-23 16:31:38.604830] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:39.895 [2024-04-23 16:31:38.604849] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:39.895 [2024-04-23 16:31:38.604861] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:39.896 [2024-04-23 16:31:38.609222] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x613000004d40 was disconnected and freed. delete nvme_qpair. 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:40.834 16:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:40.834 16:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:40.834 16:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:40.834 16:31:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3301519 00:32:40.834 16:31:39 -- common/autotest_common.sh@926 -- # '[' -z 3301519 ']' 00:32:40.834 16:31:39 -- common/autotest_common.sh@930 -- # kill -0 3301519 00:32:40.834 16:31:39 -- common/autotest_common.sh@931 -- # uname 00:32:40.834 16:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:40.834 16:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3301519 00:32:40.834 16:31:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:40.834 16:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:40.834 16:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3301519' 00:32:40.834 killing process with pid 3301519 00:32:40.834 16:31:39 -- common/autotest_common.sh@945 -- # kill 3301519 00:32:40.834 16:31:39 -- common/autotest_common.sh@950 -- # wait 3301519 00:32:41.092 16:31:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:41.092 16:31:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:41.092 16:31:39 -- nvmf/common.sh@116 -- # sync 00:32:41.092 16:31:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:41.092 16:31:39 -- nvmf/common.sh@119 -- # set +e 00:32:41.092 16:31:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:41.092 16:31:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:41.092 rmmod nvme_tcp 00:32:41.092 rmmod nvme_fabrics 00:32:41.092 rmmod nvme_keyring 00:32:41.092 16:31:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:41.092 16:31:39 -- nvmf/common.sh@123 -- # set -e 00:32:41.093 
16:31:39 -- nvmf/common.sh@124 -- # return 0 00:32:41.093 16:31:39 -- nvmf/common.sh@477 -- # '[' -n 3301346 ']' 00:32:41.093 16:31:39 -- nvmf/common.sh@478 -- # killprocess 3301346 00:32:41.093 16:31:39 -- common/autotest_common.sh@926 -- # '[' -z 3301346 ']' 00:32:41.093 16:31:39 -- common/autotest_common.sh@930 -- # kill -0 3301346 00:32:41.093 16:31:39 -- common/autotest_common.sh@931 -- # uname 00:32:41.093 16:31:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:41.093 16:31:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3301346 00:32:41.093 16:31:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:41.093 16:31:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:41.093 16:31:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3301346' 00:32:41.093 killing process with pid 3301346 00:32:41.093 16:31:39 -- common/autotest_common.sh@945 -- # kill 3301346 00:32:41.093 16:31:39 -- common/autotest_common.sh@950 -- # wait 3301346 00:32:41.658 16:31:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:41.658 16:31:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:41.658 16:31:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:41.658 16:31:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:41.658 16:31:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:41.658 16:31:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.658 16:31:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:41.659 16:31:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.195 16:31:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:44.195 00:32:44.195 real 0m22.535s 00:32:44.195 user 0m28.013s 00:32:44.195 sys 0m5.274s 00:32:44.195 16:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.195 16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:32:44.195 ************************************ 00:32:44.195 END TEST nvmf_discovery_remove_ifc 00:32:44.195 ************************************ 00:32:44.195 16:31:42 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:32:44.195 16:31:42 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:44.195 16:31:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:44.195 16:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:44.195 16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:32:44.195 ************************************ 00:32:44.195 START TEST nvmf_digest 00:32:44.195 ************************************ 00:32:44.195 16:31:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:44.195 * Looking for test storage... 
00:32:44.195 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:32:44.195 16:31:42 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.195 16:31:42 -- nvmf/common.sh@7 -- # uname -s 00:32:44.195 16:31:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.195 16:31:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.195 16:31:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.195 16:31:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.195 16:31:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.195 16:31:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.195 16:31:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.195 16:31:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.195 16:31:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.195 16:31:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.195 16:31:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:44.195 16:31:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:32:44.195 16:31:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.195 16:31:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.195 16:31:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:44.195 16:31:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:44.195 16:31:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.195 16:31:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.195 16:31:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.195 16:31:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.195 16:31:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.196 16:31:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.196 16:31:42 -- paths/export.sh@5 -- # export PATH 00:32:44.196 16:31:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.196 16:31:42 -- nvmf/common.sh@46 -- # : 0 00:32:44.196 16:31:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:44.196 16:31:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:44.196 16:31:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:44.196 16:31:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.196 16:31:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.196 16:31:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:44.196 16:31:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:44.196 16:31:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:44.196 16:31:42 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:44.196 16:31:42 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:44.196 16:31:42 -- host/digest.sh@16 -- # runtime=2 00:32:44.196 16:31:42 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:32:44.196 16:31:42 -- host/digest.sh@132 -- # nvmftestinit 00:32:44.196 16:31:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:44.196 16:31:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.196 16:31:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:44.196 16:31:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:44.196 16:31:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:44.196 16:31:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.196 16:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:44.196 16:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.196 16:31:42 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:32:44.196 16:31:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:44.196 16:31:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:44.196 16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:32:49.477 16:31:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:49.477 16:31:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:49.477 16:31:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:49.477 16:31:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:49.477 16:31:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:49.477 16:31:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:49.477 16:31:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:49.477 16:31:47 -- 
nvmf/common.sh@294 -- # net_devs=() 00:32:49.477 16:31:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:49.477 16:31:47 -- nvmf/common.sh@295 -- # e810=() 00:32:49.477 16:31:47 -- nvmf/common.sh@295 -- # local -ga e810 00:32:49.477 16:31:47 -- nvmf/common.sh@296 -- # x722=() 00:32:49.477 16:31:47 -- nvmf/common.sh@296 -- # local -ga x722 00:32:49.477 16:31:47 -- nvmf/common.sh@297 -- # mlx=() 00:32:49.477 16:31:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:49.477 16:31:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.477 16:31:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:49.478 16:31:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:49.478 16:31:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:49.478 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:49.478 16:31:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:49.478 16:31:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:49.478 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:49.478 16:31:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:49.478 16:31:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.478 16:31:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.478 16:31:47 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:49.478 Found net devices under 0000:27:00.0: cvl_0_0 00:32:49.478 16:31:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.478 16:31:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:49.478 16:31:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.478 16:31:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.478 16:31:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:49.478 Found net devices under 0000:27:00.1: cvl_0_1 00:32:49.478 16:31:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.478 16:31:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:49.478 16:31:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:49.478 16:31:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:49.478 16:31:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.478 16:31:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.478 16:31:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.478 16:31:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:49.478 16:31:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.478 16:31:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.478 16:31:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:49.478 16:31:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.478 16:31:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.478 16:31:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:49.478 16:31:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:49.478 16:31:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.478 16:31:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.478 16:31:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.478 16:31:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.478 16:31:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:49.478 16:31:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.478 16:31:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.478 16:31:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.478 16:31:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:49.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:32:49.478 00:32:49.478 --- 10.0.0.2 ping statistics --- 00:32:49.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.478 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:32:49.478 16:31:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:49.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:32:49.478 00:32:49.478 --- 10.0.0.1 ping statistics --- 00:32:49.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.478 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:32:49.478 16:31:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.478 16:31:48 -- nvmf/common.sh@410 -- # return 0 00:32:49.478 16:31:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:49.478 16:31:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.478 16:31:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:49.478 16:31:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:49.478 16:31:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.478 16:31:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:49.478 16:31:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:49.478 16:31:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:49.478 16:31:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:49.478 16:31:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:49.478 16:31:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:49.478 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:32:49.478 ************************************ 00:32:49.478 START TEST nvmf_digest_clean 00:32:49.478 ************************************ 00:32:49.478 16:31:48 -- common/autotest_common.sh@1104 -- # run_digest 00:32:49.478 16:31:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:49.478 16:31:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:49.478 16:31:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:49.478 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:32:49.478 16:31:48 -- nvmf/common.sh@469 -- # nvmfpid=3308217 00:32:49.478 16:31:48 -- nvmf/common.sh@470 -- # waitforlisten 3308217 00:32:49.478 16:31:48 -- common/autotest_common.sh@819 -- # '[' -z 3308217 ']' 00:32:49.478 16:31:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.478 16:31:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:49.478 16:31:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.478 16:31:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:49.478 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:32:49.478 16:31:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:49.478 [2024-04-23 16:31:48.273211] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:32:49.478 [2024-04-23 16:31:48.273349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.478 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.737 [2024-04-23 16:31:48.414156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.737 [2024-04-23 16:31:48.506687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:49.737 [2024-04-23 16:31:48.506865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.737 [2024-04-23 16:31:48.506879] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.737 [2024-04-23 16:31:48.506888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.737 [2024-04-23 16:31:48.506918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.304 16:31:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:50.304 16:31:48 -- common/autotest_common.sh@852 -- # return 0 00:32:50.304 16:31:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:50.304 16:31:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:50.304 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:32:50.304 16:31:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.304 16:31:48 -- host/digest.sh@120 -- # common_target_config 00:32:50.304 16:31:48 -- host/digest.sh@43 -- # rpc_cmd 00:32:50.304 16:31:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:50.304 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:32:50.304 null0 00:32:50.304 [2024-04-23 16:31:49.146151] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.304 [2024-04-23 16:31:49.170260] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:50.304 16:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:50.304 16:31:49 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:50.304 16:31:49 -- host/digest.sh@77 -- # local rw bs qd 00:32:50.304 16:31:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:50.304 16:31:49 -- host/digest.sh@80 -- # rw=randread 00:32:50.304 16:31:49 -- host/digest.sh@80 -- # bs=4096 00:32:50.304 16:31:49 -- host/digest.sh@80 -- # qd=128 00:32:50.304 16:31:49 -- host/digest.sh@82 -- # bperfpid=3308348 00:32:50.304 16:31:49 -- host/digest.sh@83 -- # waitforlisten 3308348 /var/tmp/bperf.sock 00:32:50.304 16:31:49 -- common/autotest_common.sh@819 -- # '[' -z 3308348 ']' 00:32:50.304 16:31:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:50.304 16:31:49 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:50.304 16:31:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:50.304 16:31:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:50.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:32:50.304 16:31:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:50.304 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:32:50.562 [2024-04-23 16:31:49.243410] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:32:50.562 [2024-04-23 16:31:49.243514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308348 ] 00:32:50.562 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.562 [2024-04-23 16:31:49.355584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.562 [2024-04-23 16:31:49.446357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.132 16:31:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:51.132 16:31:49 -- common/autotest_common.sh@852 -- # return 0 00:32:51.132 16:31:49 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:32:51.132 16:31:49 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:32:51.132 16:31:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:32:51.393 [2024-04-23 16:31:50.074849] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:32:51.393 16:31:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:51.393 16:31:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:56.819 16:31:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.819 16:31:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.819 nvme0n1 00:32:56.819 16:31:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:56.819 16:31:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:56.819 Running I/O for 2 seconds... 
00:32:58.726 00:32:58.726 Latency(us) 00:32:58.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:58.726 nvme0n1 : 2.04 21337.68 83.35 0.00 0.00 5876.07 2009.20 46358.10 00:32:58.726 =================================================================================================================== 00:32:58.726 Total : 21337.68 83.35 0.00 0.00 5876.07 2009.20 46358.10 00:32:58.726 0 00:32:58.985 16:31:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:58.985 16:31:57 -- host/digest.sh@92 -- # get_accel_stats 00:32:58.985 16:31:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:58.985 16:31:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:58.985 16:31:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:58.985 | select(.opcode=="crc32c") 00:32:58.985 | "\(.module_name) \(.executed)"' 00:32:58.985 16:31:57 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:32:58.985 16:31:57 -- host/digest.sh@93 -- # exp_module=dsa 00:32:58.985 16:31:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:58.985 16:31:57 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:32:58.985 16:31:57 -- host/digest.sh@97 -- # killprocess 3308348 00:32:58.985 16:31:57 -- common/autotest_common.sh@926 -- # '[' -z 3308348 ']' 00:32:58.985 16:31:57 -- common/autotest_common.sh@930 -- # kill -0 3308348 00:32:58.985 16:31:57 -- common/autotest_common.sh@931 -- # uname 00:32:58.985 16:31:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:58.985 16:31:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3308348 00:32:58.985 16:31:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:58.985 16:31:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:58.985 16:31:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3308348' 00:32:58.985 killing process with pid 3308348 00:32:58.985 16:31:57 -- common/autotest_common.sh@945 -- # kill 3308348 00:32:58.985 Received shutdown signal, test time was about 2.000000 seconds 00:32:58.985 00:32:58.985 Latency(us) 00:32:58.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.985 =================================================================================================================== 00:32:58.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.985 16:31:57 -- common/autotest_common.sh@950 -- # wait 3308348 00:33:00.363 16:31:59 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:33:00.363 16:31:59 -- host/digest.sh@77 -- # local rw bs qd 00:33:00.363 16:31:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:00.363 16:31:59 -- host/digest.sh@80 -- # rw=randread 00:33:00.363 16:31:59 -- host/digest.sh@80 -- # bs=131072 00:33:00.363 16:31:59 -- host/digest.sh@80 -- # qd=16 00:33:00.363 16:31:59 -- host/digest.sh@82 -- # bperfpid=3310442 00:33:00.363 16:31:59 -- host/digest.sh@83 -- # waitforlisten 3310442 /var/tmp/bperf.sock 00:33:00.363 16:31:59 -- common/autotest_common.sh@819 -- # '[' -z 3310442 ']' 00:33:00.363 16:31:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.363 16:31:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:00.363 16:31:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:33:00.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:00.363 16:31:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:00.363 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:33:00.363 16:31:59 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:00.622 [2024-04-23 16:31:59.326872] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:00.622 [2024-04-23 16:31:59.326996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310442 ] 00:33:00.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.622 Zero copy mechanism will not be used. 00:33:00.622 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.622 [2024-04-23 16:31:59.443493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.622 [2024-04-23 16:31:59.539391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.188 16:32:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:01.188 16:32:00 -- common/autotest_common.sh@852 -- # return 0 00:33:01.188 16:32:00 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:01.188 16:32:00 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:01.188 16:32:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:01.447 [2024-04-23 16:32:00.159986] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:01.447 16:32:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:01.447 16:32:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:06.717 16:32:05 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.717 16:32:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:06.977 nvme0n1 00:33:06.977 16:32:05 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:06.977 16:32:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:06.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:06.977 Zero copy mechanism will not be used. 00:33:06.977 Running I/O for 2 seconds... 
00:33:09.510 00:33:09.510 Latency(us) 00:33:09.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:09.510 nvme0n1 : 2.00 4462.69 557.84 0.00 0.00 3582.79 3018.11 8588.67 00:33:09.510 =================================================================================================================== 00:33:09.510 Total : 4462.69 557.84 0.00 0.00 3582.79 3018.11 8588.67 00:33:09.510 0 00:33:09.510 16:32:07 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:09.510 16:32:07 -- host/digest.sh@92 -- # get_accel_stats 00:33:09.510 16:32:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:09.510 16:32:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:09.510 | select(.opcode=="crc32c") 00:33:09.510 | "\(.module_name) \(.executed)"' 00:33:09.510 16:32:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:09.510 16:32:07 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:09.510 16:32:07 -- host/digest.sh@93 -- # exp_module=dsa 00:33:09.510 16:32:07 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:09.510 16:32:07 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:09.510 16:32:07 -- host/digest.sh@97 -- # killprocess 3310442 00:33:09.510 16:32:07 -- common/autotest_common.sh@926 -- # '[' -z 3310442 ']' 00:33:09.510 16:32:07 -- common/autotest_common.sh@930 -- # kill -0 3310442 00:33:09.510 16:32:07 -- common/autotest_common.sh@931 -- # uname 00:33:09.510 16:32:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:09.510 16:32:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3310442 00:33:09.510 16:32:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:09.510 16:32:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:09.510 16:32:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3310442' 00:33:09.510 killing process with pid 3310442 00:33:09.510 16:32:08 -- common/autotest_common.sh@945 -- # kill 3310442 00:33:09.510 Received shutdown signal, test time was about 2.000000 seconds 00:33:09.510 00:33:09.510 Latency(us) 00:33:09.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.510 =================================================================================================================== 00:33:09.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.510 16:32:08 -- common/autotest_common.sh@950 -- # wait 3310442 00:33:10.888 16:32:09 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:33:10.888 16:32:09 -- host/digest.sh@77 -- # local rw bs qd 00:33:10.888 16:32:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:10.888 16:32:09 -- host/digest.sh@80 -- # rw=randwrite 00:33:10.888 16:32:09 -- host/digest.sh@80 -- # bs=4096 00:33:10.888 16:32:09 -- host/digest.sh@80 -- # qd=128 00:33:10.888 16:32:09 -- host/digest.sh@82 -- # bperfpid=3312251 00:33:10.888 16:32:09 -- host/digest.sh@83 -- # waitforlisten 3312251 /var/tmp/bperf.sock 00:33:10.888 16:32:09 -- common/autotest_common.sh@819 -- # '[' -z 3312251 ']' 00:33:10.888 16:32:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:10.888 16:32:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:10.888 16:32:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:33:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:10.888 16:32:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:10.888 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:33:10.888 16:32:09 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:10.888 [2024-04-23 16:32:09.502447] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:10.888 [2024-04-23 16:32:09.502563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312251 ] 00:33:10.888 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.888 [2024-04-23 16:32:09.616171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.888 [2024-04-23 16:32:09.710413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.458 16:32:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:11.458 16:32:10 -- common/autotest_common.sh@852 -- # return 0 00:33:11.458 16:32:10 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:11.458 16:32:10 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:11.458 16:32:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:11.458 [2024-04-23 16:32:10.330922] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:11.458 16:32:10 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:11.458 16:32:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:16.729 16:32:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.729 16:32:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.987 nvme0n1 00:33:16.987 16:32:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:16.987 16:32:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.987 Running I/O for 2 seconds... 
00:33:19.518 00:33:19.518 Latency(us) 00:33:19.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.518 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.518 nvme0n1 : 2.00 27833.57 108.72 0.00 0.00 4590.93 2095.43 13797.05 00:33:19.518 =================================================================================================================== 00:33:19.518 Total : 27833.57 108.72 0.00 0.00 4590.93 2095.43 13797.05 00:33:19.518 0 00:33:19.518 16:32:17 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:19.518 16:32:17 -- host/digest.sh@92 -- # get_accel_stats 00:33:19.518 16:32:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:19.518 16:32:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:19.518 | select(.opcode=="crc32c") 00:33:19.518 | "\(.module_name) \(.executed)"' 00:33:19.518 16:32:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:19.518 16:32:18 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:19.518 16:32:18 -- host/digest.sh@93 -- # exp_module=dsa 00:33:19.518 16:32:18 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:19.518 16:32:18 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:19.518 16:32:18 -- host/digest.sh@97 -- # killprocess 3312251 00:33:19.518 16:32:18 -- common/autotest_common.sh@926 -- # '[' -z 3312251 ']' 00:33:19.518 16:32:18 -- common/autotest_common.sh@930 -- # kill -0 3312251 00:33:19.518 16:32:18 -- common/autotest_common.sh@931 -- # uname 00:33:19.518 16:32:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:19.518 16:32:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3312251 00:33:19.518 16:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:19.518 16:32:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:19.518 16:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3312251' 00:33:19.518 killing process with pid 3312251 00:33:19.518 16:32:18 -- common/autotest_common.sh@945 -- # kill 3312251 00:33:19.518 Received shutdown signal, test time was about 2.000000 seconds 00:33:19.518 00:33:19.518 Latency(us) 00:33:19.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.518 =================================================================================================================== 00:33:19.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.518 16:32:18 -- common/autotest_common.sh@950 -- # wait 3312251 00:33:20.897 16:32:19 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:33:20.897 16:32:19 -- host/digest.sh@77 -- # local rw bs qd 00:33:20.897 16:32:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.897 16:32:19 -- host/digest.sh@80 -- # rw=randwrite 00:33:20.897 16:32:19 -- host/digest.sh@80 -- # bs=131072 00:33:20.897 16:32:19 -- host/digest.sh@80 -- # qd=16 00:33:20.897 16:32:19 -- host/digest.sh@82 -- # bperfpid=3314338 00:33:20.897 16:32:19 -- host/digest.sh@83 -- # waitforlisten 3314338 /var/tmp/bperf.sock 00:33:20.897 16:32:19 -- common/autotest_common.sh@819 -- # '[' -z 3314338 ']' 00:33:20.897 16:32:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.897 16:32:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:20.897 16:32:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:33:20.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.897 16:32:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:20.897 16:32:19 -- common/autotest_common.sh@10 -- # set +x 00:33:20.897 16:32:19 -- host/digest.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:20.897 [2024-04-23 16:32:19.574318] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:20.897 [2024-04-23 16:32:19.574466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314338 ] 00:33:20.897 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.897 Zero copy mechanism will not be used. 00:33:20.897 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.897 [2024-04-23 16:32:19.704947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.897 [2024-04-23 16:32:19.794143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.468 16:32:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:21.468 16:32:20 -- common/autotest_common.sh@852 -- # return 0 00:33:21.468 16:32:20 -- host/digest.sh@85 -- # [[ 1 -eq 1 ]] 00:33:21.468 16:32:20 -- host/digest.sh@85 -- # bperf_rpc dsa_scan_accel_module 00:33:21.468 16:32:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:33:21.727 [2024-04-23 16:32:20.426746] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:33:21.727 16:32:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:33:21.727 16:32:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:27.015 16:32:25 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.015 16:32:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.015 nvme0n1 00:33:27.015 16:32:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:33:27.015 16:32:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:27.015 Zero copy mechanism will not be used. 00:33:27.015 Running I/O for 2 seconds... 
00:33:29.551 00:33:29.551 Latency(us) 00:33:29.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.551 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:29.551 nvme0n1 : 2.00 2249.68 281.21 0.00 0.00 7101.16 4294.33 17936.17 00:33:29.551 =================================================================================================================== 00:33:29.551 Total : 2249.68 281.21 0.00 0.00 7101.16 4294.33 17936.17 00:33:29.551 0 00:33:29.551 16:32:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:33:29.551 16:32:27 -- host/digest.sh@92 -- # get_accel_stats 00:33:29.551 16:32:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:29.551 16:32:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:29.551 | select(.opcode=="crc32c") 00:33:29.551 | "\(.module_name) \(.executed)"' 00:33:29.551 16:32:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:29.551 16:32:28 -- host/digest.sh@93 -- # [[ 1 -eq 1 ]] 00:33:29.551 16:32:28 -- host/digest.sh@93 -- # exp_module=dsa 00:33:29.551 16:32:28 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:33:29.551 16:32:28 -- host/digest.sh@95 -- # [[ dsa == \d\s\a ]] 00:33:29.551 16:32:28 -- host/digest.sh@97 -- # killprocess 3314338 00:33:29.551 16:32:28 -- common/autotest_common.sh@926 -- # '[' -z 3314338 ']' 00:33:29.551 16:32:28 -- common/autotest_common.sh@930 -- # kill -0 3314338 00:33:29.551 16:32:28 -- common/autotest_common.sh@931 -- # uname 00:33:29.551 16:32:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:29.551 16:32:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3314338 00:33:29.551 16:32:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:29.551 16:32:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:29.551 16:32:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3314338' 00:33:29.551 killing process with pid 3314338 00:33:29.551 16:32:28 -- common/autotest_common.sh@945 -- # kill 3314338 00:33:29.551 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.551 00:33:29.551 Latency(us) 00:33:29.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.551 =================================================================================================================== 00:33:29.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.551 16:32:28 -- common/autotest_common.sh@950 -- # wait 3314338 00:33:30.935 16:32:29 -- host/digest.sh@126 -- # killprocess 3308217 00:33:30.935 16:32:29 -- common/autotest_common.sh@926 -- # '[' -z 3308217 ']' 00:33:30.935 16:32:29 -- common/autotest_common.sh@930 -- # kill -0 3308217 00:33:30.935 16:32:29 -- common/autotest_common.sh@931 -- # uname 00:33:30.935 16:32:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:30.935 16:32:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3308217 00:33:30.935 16:32:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:30.935 16:32:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:30.935 16:32:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3308217' 00:33:30.935 killing process with pid 3308217 00:33:30.935 16:32:29 -- common/autotest_common.sh@945 -- # kill 3308217 00:33:30.935 16:32:29 -- common/autotest_common.sh@950 -- # wait 3308217 00:33:31.194 00:33:31.194 
real 0m41.910s 00:33:31.194 user 1m2.116s 00:33:31.194 sys 0m3.597s 00:33:31.194 16:32:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.194 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:31.194 ************************************ 00:33:31.194 END TEST nvmf_digest_clean 00:33:31.194 ************************************ 00:33:31.194 16:32:30 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:33:31.194 16:32:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:31.194 16:32:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.194 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:31.194 ************************************ 00:33:31.194 START TEST nvmf_digest_error 00:33:31.194 ************************************ 00:33:31.194 16:32:30 -- common/autotest_common.sh@1104 -- # run_digest_error 00:33:31.194 16:32:30 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:33:31.194 16:32:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:31.194 16:32:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:31.194 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:31.194 16:32:30 -- nvmf/common.sh@469 -- # nvmfpid=3316350 00:33:31.194 16:32:30 -- nvmf/common.sh@470 -- # waitforlisten 3316350 00:33:31.194 16:32:30 -- common/autotest_common.sh@819 -- # '[' -z 3316350 ']' 00:33:31.194 16:32:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.194 16:32:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:31.194 16:32:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.194 16:32:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:31.194 16:32:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:31.194 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:31.452 [2024-04-23 16:32:30.194015] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:31.452 [2024-04-23 16:32:30.194125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.452 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.452 [2024-04-23 16:32:30.315607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.711 [2024-04-23 16:32:30.412247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:31.711 [2024-04-23 16:32:30.412414] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.711 [2024-04-23 16:32:30.412427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.711 [2024-04-23 16:32:30.412436] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
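Note on the startup above: for the error-path test the nvmf target is launched with --wait-for-rpc, which holds framework initialization until the relevant RPCs arrive. That window is what lets the crc32c opcode be reassigned to the software "error" accel module before any digest work runs; the accel_assign_opc call appears just below. A minimal target-side sketch, assuming the default /var/tmp/spdk.sock RPC socket; the framework_start_init step is inferred from the --wait-for-rpc flow and is not echoed in this trace:

  # remap crc32c to the error-injection accel module while init is held
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # resume normal startup once the mapping is in place (inferred step)
  scripts/rpc.py framework_start_init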
00:33:31.711 [2024-04-23 16:32:30.412463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.971 16:32:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:31.971 16:32:30 -- common/autotest_common.sh@852 -- # return 0 00:33:31.971 16:32:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:31.971 16:32:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:31.971 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:32.232 16:32:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.232 16:32:30 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:32.232 16:32:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.232 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:32.232 [2024-04-23 16:32:30.924949] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:32.232 16:32:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.232 16:32:30 -- host/digest.sh@104 -- # common_target_config 00:33:32.232 16:32:30 -- host/digest.sh@43 -- # rpc_cmd 00:33:32.232 16:32:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.232 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:33:32.232 null0 00:33:32.232 [2024-04-23 16:32:31.099002] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.232 [2024-04-23 16:32:31.123182] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.232 16:32:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.232 16:32:31 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:33:32.232 16:32:31 -- host/digest.sh@54 -- # local rw bs qd 00:33:32.232 16:32:31 -- host/digest.sh@56 -- # rw=randread 00:33:32.232 16:32:31 -- host/digest.sh@56 -- # bs=4096 00:33:32.232 16:32:31 -- host/digest.sh@56 -- # qd=128 00:33:32.232 16:32:31 -- host/digest.sh@58 -- # bperfpid=3316490 00:33:32.232 16:32:31 -- host/digest.sh@60 -- # waitforlisten 3316490 /var/tmp/bperf.sock 00:33:32.232 16:32:31 -- common/autotest_common.sh@819 -- # '[' -z 3316490 ']' 00:33:32.232 16:32:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.232 16:32:31 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:32.232 16:32:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:32.232 16:32:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.232 16:32:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:32.232 16:32:31 -- common/autotest_common.sh@10 -- # set +x 00:33:32.493 [2024-04-23 16:32:31.211504] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:33:32.493 [2024-04-23 16:32:31.211647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316490 ] 00:33:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.493 [2024-04-23 16:32:31.344176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.754 [2024-04-23 16:32:31.443153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.012 16:32:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:33.012 16:32:31 -- common/autotest_common.sh@852 -- # return 0 00:33:33.012 16:32:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.012 16:32:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:33.271 16:32:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:33.271 16:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:33.271 16:32:32 -- common/autotest_common.sh@10 -- # set +x 00:33:33.271 16:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:33.271 16:32:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.271 16:32:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.530 nvme0n1 00:33:33.530 16:32:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:33.530 16:32:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:33.530 16:32:32 -- common/autotest_common.sh@10 -- # set +x 00:33:33.530 16:32:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:33.530 16:32:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.530 16:32:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.530 Running I/O for 2 seconds... 
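Note on the records that follow: the bperf side attaches the controller with --ddgst, so every received payload has its data digest recomputed through the accel framework, and accel_error_inject_error is armed with '-t corrupt -i 256' so that crc32c results come back corrupted. Each corrupted digest makes nvme_tcp report a "data digest error" and complete the READ with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the following screens of output show. A condensed sketch of that initiator-side sequence, paths abbreviated from the trace above:

  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable      # clear any earlier injection
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                               # data digest enabled
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results (parameters as captured here)
  bdevperf.py -s /var/tmp/bperf.sock perform_tests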
00:33:33.530 [2024-04-23 16:32:32.323650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.323695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.323710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.335427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.335457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.335469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.347068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.347095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.347106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.359892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.359917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.359928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.368096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.368122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.368133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.381159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.381184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.381194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.393828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.393852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.393863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.406645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.406670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.406685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.414342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.414366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.414375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.425436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.425461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.425472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.437371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.437400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.437409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.448319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.448343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.448353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.530 [2024-04-23 16:32:32.455594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.530 [2024-04-23 16:32:32.455618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.530 [2024-04-23 16:32:32.455631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.464844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.464870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 
[2024-04-23 16:32:32.464881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.473632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.473658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.473680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.482034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.482059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.482069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.490535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.490564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.490574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.499638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.499663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.499673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.508001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.508027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.508038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.516613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.516642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.516652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.527076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.527101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:4727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.527110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.535830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.535854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.535864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.547874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.547898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.547908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.559911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.559938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.559948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.571726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.571750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.571763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.583867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.583890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.583900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.595767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.595792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.595801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.607611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 
[2024-04-23 16:32:32.607638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.607648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.619416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.619439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.619449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.630994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.631017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.631026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.643315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.790 [2024-04-23 16:32:32.643339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.790 [2024-04-23 16:32:32.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.790 [2024-04-23 16:32:32.654871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.654894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.654904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.791 [2024-04-23 16:32:32.671294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.671318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.671327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.791 [2024-04-23 16:32:32.683241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.683269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.683279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.791 [2024-04-23 16:32:32.695317] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.695343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.695353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.791 [2024-04-23 16:32:32.707365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.707388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.707398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.791 [2024-04-23 16:32:32.719120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:33.791 [2024-04-23 16:32:32.719144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.791 [2024-04-23 16:32:32.719154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.730986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.731010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.051 [2024-04-23 16:32:32.731020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.743071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.743094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.051 [2024-04-23 16:32:32.743104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.755139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.755164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.051 [2024-04-23 16:32:32.755174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.766819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.766842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.051 [2024-04-23 16:32:32.766852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.778649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.778676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.051 [2024-04-23 16:32:32.778686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.051 [2024-04-23 16:32:32.790851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.051 [2024-04-23 16:32:32.790875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.790886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.802661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.802684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.802694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.814801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.814825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.814834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.826881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.826904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.826913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.839056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.839079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.839088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.851092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.851116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.851125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.862876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.862900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.875121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.875145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.875154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.887396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.887423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.887432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.899247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.899272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.899281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.911387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.911411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.911420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.923388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.923412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.923421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.935578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.935602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.935612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.947556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.947580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.947589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.959431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.959454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.959464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.052 [2024-04-23 16:32:32.971677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.052 [2024-04-23 16:32:32.971710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.052 [2024-04-23 16:32:32.971720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.314 [2024-04-23 16:32:32.983834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.314 [2024-04-23 16:32:32.983859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:32.983868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:32.997218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:32.997246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:32.997259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.010258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.010283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.010293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.022046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.022070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.022079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.034226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.034253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.034263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.046063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.046088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.046098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.058255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.058279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.070175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.070200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.070209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.082421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.082445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.082454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.094399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.094431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.094441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.106686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.106710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.106720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.118479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.118505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.118515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.130531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.130556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.130566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.142733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.142758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.142768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.154814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.154840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.154849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.166602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.166633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.166643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.178722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.178749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.178758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 
16:32:33.190704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.190736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.190746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.202934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.202960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.202971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.215015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.215040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.215051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.226859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.226885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.226894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.315 [2024-04-23 16:32:33.238945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.315 [2024-04-23 16:32:33.238971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.315 [2024-04-23 16:32:33.238982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.250947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.250973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.250983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.262845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.262871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.262880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.274915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.274940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.287118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.287143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.287153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.299043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.299074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.299084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.311744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.311770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.311780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.323589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.323616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.323625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.334901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.334928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.334938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.346865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.346891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 
16:32:33.346901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.359136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.359161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.359171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.371027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.371051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.371060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.383691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.383717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.383727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.395069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.395096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.395105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.407580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.407615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.419670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.419695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.419705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.431596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.431621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:66 nsid:1 lba:24148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.431636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.443621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.443652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.443662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.455309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.455336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.467364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.467389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.467399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.577 [2024-04-23 16:32:33.479278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.577 [2024-04-23 16:32:33.479304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.577 [2024-04-23 16:32:33.479314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.578 [2024-04-23 16:32:33.491278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.578 [2024-04-23 16:32:33.491304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.578 [2024-04-23 16:32:33.491315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.578 [2024-04-23 16:32:33.502990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.578 [2024-04-23 16:32:33.503021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.578 [2024-04-23 16:32:33.503031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.839 [2024-04-23 16:32:33.514827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.839 
[2024-04-23 16:32:33.514854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.839 [2024-04-23 16:32:33.514863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.839 [2024-04-23 16:32:33.526649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.839 [2024-04-23 16:32:33.526674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.839 [2024-04-23 16:32:33.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.839 [2024-04-23 16:32:33.538502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.839 [2024-04-23 16:32:33.538527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.839 [2024-04-23 16:32:33.538536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.839 [2024-04-23 16:32:33.555262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.839 [2024-04-23 16:32:33.555287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.555297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.567323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.567350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.567360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.579380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.579405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.579415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.591525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.591549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.591559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.603337] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.603361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.603370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.615087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.615111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.615120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.627102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.627126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.627136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.639066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.639091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.639102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.650862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.650886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.650896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.662666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.662689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.662699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.674750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.674773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.674783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.686537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.686561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.686570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.698556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.698581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.698590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.710598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.710630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.710649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.722426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.722453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.722463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.734255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.734280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.734289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.746173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.746197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.746207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.758109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.758144] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.840 [2024-04-23 16:32:33.770083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:34.840 [2024-04-23 16:32:33.770107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.840 [2024-04-23 16:32:33.770117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.782050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.782075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.782085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.794074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.794099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.805865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.805890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.805900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.817693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.817718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.817728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.829502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.829526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.829536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.841705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.841731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:2026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.841741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.853498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.853522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.853531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.865679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.865703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.865713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.877453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.877477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.877486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.889497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.889522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.889531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.901588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.901611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.901621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.913385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.913410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.913423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.925311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 
16:32:33.925337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.925347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.937362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.937387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.937396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.949352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.949377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.949388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.961205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.961229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.961239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.972951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.972977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.972987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.985152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.985178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.985188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:33.996933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:33.996959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:33.996978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:34.009132] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:34.009160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:34.009170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:34.021035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:34.021062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:34.021072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.102 [2024-04-23 16:32:34.033396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.102 [2024-04-23 16:32:34.033420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.102 [2024-04-23 16:32:34.033430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.045508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.045535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.045544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.057857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.057882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.057892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.069714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.069738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.069747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.082147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.082171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.082181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.093582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.093606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.093615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.105535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.105558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.105568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.117898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.117922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.117935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.129527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.129551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.129560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.141229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.141252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.141262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.153408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.153444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.361 [2024-04-23 16:32:34.165468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.361 [2024-04-23 16:32:34.165492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.361 [2024-04-23 16:32:34.165501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.177427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.177450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.177459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.189336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.189360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.189369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.201968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.201992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.202002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.213441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.213464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.213473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.226114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.226138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.226147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.237793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.237817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.237826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.249585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.249608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14076 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.249618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.260921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.260946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.260955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.272911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.272943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.362 [2024-04-23 16:32:34.284827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.362 [2024-04-23 16:32:34.284851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.362 [2024-04-23 16:32:34.284860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.620 [2024-04-23 16:32:34.297117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.620 [2024-04-23 16:32:34.297141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.620 [2024-04-23 16:32:34.297151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.620 [2024-04-23 16:32:34.309776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:35.620 [2024-04-23 16:32:34.309798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.620 [2024-04-23 16:32:34.309808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.620 00:33:35.620 Latency(us) 00:33:35.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.620 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:35.620 nvme0n1 : 2.05 21119.52 82.50 0.00 0.00 5936.21 1750.50 52980.68 00:33:35.620 =================================================================================================================== 00:33:35.620 Total : 21119.52 82.50 0.00 0.00 5936.21 1750.50 52980.68 00:33:35.620 0 00:33:35.620 16:32:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:35.620 16:32:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:35.620 16:32:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:33:35.620 | .driver_specific 00:33:35.620 | .nvme_error 00:33:35.620 | .status_code 00:33:35.620 | .command_transient_transport_error' 00:33:35.620 16:32:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:35.620 16:32:34 -- host/digest.sh@71 -- # (( 169 > 0 )) 00:33:35.620 16:32:34 -- host/digest.sh@73 -- # killprocess 3316490 00:33:35.620 16:32:34 -- common/autotest_common.sh@926 -- # '[' -z 3316490 ']' 00:33:35.620 16:32:34 -- common/autotest_common.sh@930 -- # kill -0 3316490 00:33:35.620 16:32:34 -- common/autotest_common.sh@931 -- # uname 00:33:35.620 16:32:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:35.620 16:32:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316490 00:33:35.620 16:32:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:35.620 16:32:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:35.620 16:32:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316490' 00:33:35.620 killing process with pid 3316490 00:33:35.620 16:32:34 -- common/autotest_common.sh@945 -- # kill 3316490 00:33:35.620 Received shutdown signal, test time was about 2.000000 seconds 00:33:35.620 00:33:35.620 Latency(us) 00:33:35.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.620 =================================================================================================================== 00:33:35.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.620 16:32:34 -- common/autotest_common.sh@950 -- # wait 3316490 00:33:36.186 16:32:34 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:33:36.186 16:32:34 -- host/digest.sh@54 -- # local rw bs qd 00:33:36.186 16:32:34 -- host/digest.sh@56 -- # rw=randread 00:33:36.186 16:32:34 -- host/digest.sh@56 -- # bs=131072 00:33:36.186 16:32:34 -- host/digest.sh@56 -- # qd=16 00:33:36.186 16:32:34 -- host/digest.sh@58 -- # bperfpid=3317329 00:33:36.186 16:32:34 -- host/digest.sh@60 -- # waitforlisten 3317329 /var/tmp/bperf.sock 00:33:36.186 16:32:34 -- common/autotest_common.sh@819 -- # '[' -z 3317329 ']' 00:33:36.186 16:32:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.186 16:32:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:36.186 16:32:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:36.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:36.186 16:32:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:36.186 16:32:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:36.186 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:33:36.186 [2024-04-23 16:32:34.969151] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:36.186 [2024-04-23 16:32:34.969266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3317329 ] 00:33:36.186 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.186 Zero copy mechanism will not be used. 
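The shell trace above is where digest.sh evaluates the run that produced the preceding flood of data digest errors: it queries bdevperf over its RPC socket with bdev_get_iostat and extracts the bdev's driver-specific command_transient_transport_error counter with jq, then requires the count to be non-zero (here 169, per the "(( 169 > 0 ))" check) before killing the bperf process. A condensed sketch of that check, using only the commands visible in the trace (the socket path, bdev name, and rpc.py location are the ones used by this job and would differ elsewhere):

  # Pull the transient transport error count for nvme0n1 from the bperf RPC socket.
  errcount=$(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The stage passes only if digest errors were actually recorded for the run.
  (( errcount > 0 ))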
00:33:36.186 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.186 [2024-04-23 16:32:35.080515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.446 [2024-04-23 16:32:35.175466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.018 16:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:37.018 16:32:35 -- common/autotest_common.sh@852 -- # return 0 00:33:37.018 16:32:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:37.018 16:32:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:37.018 16:32:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:37.018 16:32:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.018 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:33:37.018 16:32:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:37.018 16:32:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.018 16:32:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.279 nvme0n1 00:33:37.279 16:32:36 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:37.279 16:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.279 16:32:36 -- common/autotest_common.sh@10 -- # set +x 00:33:37.279 16:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:37.279 16:32:36 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:37.279 16:32:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.280 Zero copy mechanism will not be used. 00:33:37.280 Running I/O for 2 seconds... 
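Before the I/O window whose error output follows, the trace shows how this stage (randread, 128 KiB I/O, queue depth 16) is wired up: bdevperf is launched against /var/tmp/bperf.sock, bdev_nvme_set_options turns on NVMe error statistics and unlimited bdev-level retries, the controller is attached with --ddgst so data digests are carried and checked on the TCP transport, and accel_error_inject_error is told to corrupt crc32c results (with the -i 32 argument shown in the trace), so a portion of the reads fail digest verification and complete as COMMAND TRANSIENT TRANSPORT ERROR. A minimal restatement of that RPC sequence as it appears above (rpc.py and bdevperf.py abbreviate the full workspace paths in the trace; the trace also clears any earlier injection with -t disable before attaching, and rpc_cmd is the autotest helper, which unlike the bperf_rpc calls does not point at the bperf socket):

  # Keep NVMe error counters and retry failed I/O at the bdev layer without limit.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target with data digest enabled on the TCP transport.
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption via the accel error module (parameters as traced above).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the configured 2-second randread workload through bdevperf.
  bdevperf.py -s /var/tmp/bperf.sock perform_tests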
00:33:37.280 [2024-04-23 16:32:36.176748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.280 [2024-04-23 16:32:36.176796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.280 [2024-04-23 16:32:36.176812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.280 [2024-04-23 16:32:36.184172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.280 [2024-04-23 16:32:36.184205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.280 [2024-04-23 16:32:36.184219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.280 [2024-04-23 16:32:36.191322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.280 [2024-04-23 16:32:36.191349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.280 [2024-04-23 16:32:36.191360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.280 [2024-04-23 16:32:36.198472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.280 [2024-04-23 16:32:36.198500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.280 [2024-04-23 16:32:36.198511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.280 [2024-04-23 16:32:36.205566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.280 [2024-04-23 16:32:36.205592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.280 [2024-04-23 16:32:36.205602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.212692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.212724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.212735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.219770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.219798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.219809] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.226828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.226854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.226865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.233979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.234006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.234017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.241084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.241111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.241121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.539 [2024-04-23 16:32:36.248142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.539 [2024-04-23 16:32:36.248166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.539 [2024-04-23 16:32:36.248176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.255227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.255251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.255260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.262238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.262263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.262273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.269339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.269364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.540 [2024-04-23 16:32:36.269381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.276335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.276361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.276380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.283447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.283472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.283482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.290462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.290488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.290499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.297541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.297567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.297579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.304543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.304568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.304579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.311637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.311661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.311672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.318648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.318672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.318683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.325742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.325768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.325778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.332742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.332770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.332781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.339818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.339842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.339854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.346813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.346837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.346849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.353886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.353910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.353922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.360881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.360906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.360916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.367955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.367979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.367990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.374983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.375006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.375017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.382040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.382064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.382075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.389027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.389050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.389064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.396077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.396112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.403061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.403086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.403096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.410707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.410738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.410749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.420362] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.420389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.420400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.428897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.428922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.428932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.540 [2024-04-23 16:32:36.437669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.540 [2024-04-23 16:32:36.437694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.540 [2024-04-23 16:32:36.437706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.541 [2024-04-23 16:32:36.447159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.541 [2024-04-23 16:32:36.447184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.541 [2024-04-23 16:32:36.447196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.541 [2024-04-23 16:32:36.455973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.541 [2024-04-23 16:32:36.455997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.541 [2024-04-23 16:32:36.456006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.541 [2024-04-23 16:32:36.465253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.541 [2024-04-23 16:32:36.465286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.541 [2024-04-23 16:32:36.465296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.474663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.474690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-04-23 16:32:36.474700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.483570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.483594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-04-23 16:32:36.483604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.492425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.492449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-04-23 16:32:36.492459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.501644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.501669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-04-23 16:32:36.501680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.510866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.510890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.800 [2024-04-23 16:32:36.510900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.800 [2024-04-23 16:32:36.520349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.800 [2024-04-23 16:32:36.520374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.520384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.528796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.528821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.528830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.536710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.536735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.536751] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.544505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.544530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.544539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.551618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.551648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.551659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.558645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.558667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.558678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.565749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.565772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.565783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.572768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.572791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.572802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.579869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.579891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.579902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.586867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.586890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.586901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.593964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.593987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.593997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.600989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.601016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.601027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.608074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.608104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.608114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.615122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.615145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.615155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.622196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.622219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.622229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.629214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.629238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.629248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.637818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.637842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.637852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.647057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.647082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.647093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.656268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.656293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.656303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.665271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.665297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.665314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.673988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.674015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.674026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.683166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.683190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.683201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.691820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.691844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.691854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.700983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.801 [2024-04-23 16:32:36.701007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.801 [2024-04-23 16:32:36.701017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.801 [2024-04-23 16:32:36.710175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.802 [2024-04-23 16:32:36.710205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-04-23 16:32:36.710218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.802 [2024-04-23 16:32:36.719512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.802 [2024-04-23 16:32:36.719536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-04-23 16:32:36.719546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.802 [2024-04-23 16:32:36.728497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:37.802 [2024-04-23 16:32:36.728524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.802 [2024-04-23 16:32:36.728534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.736949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.736974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.736984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.746082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.746111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.746122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.755245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.755269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.755279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.062 
[2024-04-23 16:32:36.764107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.764132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.764142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.773199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.773224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.773234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.782266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.782290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.782300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.790595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.790621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.790635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.799316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.799341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.799351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.808134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.808158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.808169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.816693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.816717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.816732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.826404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.826430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.826439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.834588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.834612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.834622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.842243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.842267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.842277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.850551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.850575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.062 [2024-04-23 16:32:36.850585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.062 [2024-04-23 16:32:36.857680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.062 [2024-04-23 16:32:36.857703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.857712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.864654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.864677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.864687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.871702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 
[2024-04-23 16:32:36.871734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.878711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.878735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.878745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.885719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.885746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.892735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.892758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.892768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.899746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.899769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.899780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.906846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.906869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.906879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.913853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.913875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.913886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.920868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.920891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.920901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.927881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.927904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.927913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.934914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.934936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.934946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.941962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.941983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.941997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.948970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.948993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.949003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.955978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.956004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.956015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.962958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.962992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.969976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 
[2024-04-23 16:32:36.969998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.970009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.976896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.976918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.976929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.983866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.983889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.983899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.063 [2024-04-23 16:32:36.990881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.063 [2024-04-23 16:32:36.990904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.063 [2024-04-23 16:32:36.990915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:36.997891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:36.997915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:36.997925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.005000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.005028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.005038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.012003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.012025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.012035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.019053] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.019076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.019086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.026138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.026162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.026172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.033127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.033153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.033163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.040039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.040061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.040072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.323 [2024-04-23 16:32:37.047085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.323 [2024-04-23 16:32:37.047109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.323 [2024-04-23 16:32:37.047120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.054136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.054159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.054169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.061151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.061175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.061192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.068098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.068121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.068130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.075225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.075248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.075257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.082409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.082432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.082443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.089433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.089455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.089465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.096446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.096468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.096478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.103458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.103480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.103490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.110466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.110489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.110499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.117460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.117483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.117494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.124470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.124496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.124506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.131518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.131541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.131551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.138516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.138538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.138548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.145517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.145540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.145549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.152527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.152550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.152560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.159557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.159579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.159590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.166542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.166565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.166575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.173550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.173574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.173584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.180516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.180538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.180549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.187473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.187496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.187506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.194478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.194500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.194510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.201486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.201508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.201518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.208479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 
16:32:37.208501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.324 [2024-04-23 16:32:37.208511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.324 [2024-04-23 16:32:37.215489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.324 [2024-04-23 16:32:37.215511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.215521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.325 [2024-04-23 16:32:37.222456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.325 [2024-04-23 16:32:37.222479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.222489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.325 [2024-04-23 16:32:37.229500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.325 [2024-04-23 16:32:37.229523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.229533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.325 [2024-04-23 16:32:37.236467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.325 [2024-04-23 16:32:37.236490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.236499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.325 [2024-04-23 16:32:37.243514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.325 [2024-04-23 16:32:37.243541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.243551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.325 [2024-04-23 16:32:37.250488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.325 [2024-04-23 16:32:37.250510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.325 [2024-04-23 16:32:37.250528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.257540] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.257564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.257574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.264545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.264568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.264579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.271505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.271527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.271537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.278533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.278556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.278567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.285541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.285563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.285573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.292455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.292477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.292487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.299419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.299441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.299451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.306433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.306456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.306466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.313450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.313472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.313482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.320417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.320439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.320450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.327435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.327458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.327468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.334403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.334426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.334436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.341322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.341345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.341355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.348278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.348301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.348311] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.355330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.355353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.355365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.362258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.362284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.362295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.369313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.369336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.369346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.376277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.376301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.376311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.383365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.383387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.383397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.390369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.390392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.390402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.397443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.397466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.397476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.404391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.404413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.404423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.411441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.411464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.587 [2024-04-23 16:32:37.411474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.587 [2024-04-23 16:32:37.418402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.587 [2024-04-23 16:32:37.418425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.418436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.425457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.425480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.425490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.432422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.432445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.432455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.439472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.439494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.439504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.446456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 
16:32:37.446479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.446489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.453513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.453536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.453546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.460472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.460495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.460504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.467699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.467721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.467731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.474655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.474677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.474686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.481640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.481665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.481675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.488671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.488693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.488702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.495700] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.495721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.495730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.502730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.502751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.502761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.509658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.509680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.509689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.588 [2024-04-23 16:32:37.516703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.588 [2024-04-23 16:32:37.516725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.588 [2024-04-23 16:32:37.516734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.523738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.523762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.523771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.530778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.530801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.530810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.537796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.537818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.537827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.544832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.544854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.544863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.551887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.551934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.551943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.558949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.558972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.558982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.565943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.565965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.565975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.572981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.573004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.573013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.579994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.580016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.580025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.587003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.587025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.587034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.594186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.594208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.594217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.601189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.601216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.601225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.608193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.608216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.608225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.615206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.615228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.615237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.622200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.622221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.622230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.629109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.629131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.629140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.636160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.636182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.636191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.643137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.643161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.643170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.651977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.652009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.652024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.660104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.660129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.660138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.667055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.667079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.667089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.674137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.674162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.674172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.681138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.681162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.681172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.688783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 
16:32:37.688811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.688821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.697480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.697506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.697516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.706058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.853 [2024-04-23 16:32:37.706094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.853 [2024-04-23 16:32:37.706104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.853 [2024-04-23 16:32:37.714408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.714438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.714448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.722123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.722152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.722163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.730613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.730651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.730661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.739121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.739148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.739158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.748143] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.748169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.748179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.757115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.757142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.757152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.766202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.766228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.766238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.854 [2024-04-23 16:32:37.775251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:38.854 [2024-04-23 16:32:37.775277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.854 [2024-04-23 16:32:37.775286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.197 [2024-04-23 16:32:37.784024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.197 [2024-04-23 16:32:37.784073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.784090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.797658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.797703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.797717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.809065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.809115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.809134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.817273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.817305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.817318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.823859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.823888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.823901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.830273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.830299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.830310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.836799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.836827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.836839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.843238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.843262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.843272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.849751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.849776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.849786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.856163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.856188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.856198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.862615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.862651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.869080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.869110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.869120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.875442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.875467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.875476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.881866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.881891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.881901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.888347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.888372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.888381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.894846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.894870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.894879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.901297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.901321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.901331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.907764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.907790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.907801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.914226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.914250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.914259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.920681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.920706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.920715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.927128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.927152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.927162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.933587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.933611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.933620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.940038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.940062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.940071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.946476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 
16:32:37.946500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.946509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.952930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.952955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.952965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.959380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.959405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.959414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.965846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.198 [2024-04-23 16:32:37.965870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.198 [2024-04-23 16:32:37.965880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.198 [2024-04-23 16:32:37.972283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:37.972309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:37.972319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:37.978744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:37.978776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:37.978786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:37.985186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:37.985212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:37.985221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:37.991646] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:37.991671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:37.991681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:37.998087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:37.998112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:37.998121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.004494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.004519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.004529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.010934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.010959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.010969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.017377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.017402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.017411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.023764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.023790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.023799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.030232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.030257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.030267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.036709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.036733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.036743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.043137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.043162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.043171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.049575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.049600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.049610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.056016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.056041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.056050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.062492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.062518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.062527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.068904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.068929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.068938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.199 [2024-04-23 16:32:38.075410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.199 [2024-04-23 16:32:38.075436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.199 [2024-04-23 16:32:38.075445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.082087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.082113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.088512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.088550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.088560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.094969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.094993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.095002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.101383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.101406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.101416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.107815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.107839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.107849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.114394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.114418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.114428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.120817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.120839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.127287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.127312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.127322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.133708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.133732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.133741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.140114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.140139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.140148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.146584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.146619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.153047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.153091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.463 [2024-04-23 16:32:38.159487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x613000003d80) 00:33:39.463 [2024-04-23 16:32:38.159512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.463 [2024-04-23 16:32:38.159522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.463 00:33:39.463 Latency(us) 00:33:39.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.463 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO 
size: 131072) 00:33:39.463 nvme0n1 : 2.00 4220.00 527.50 0.00 0.00 3788.91 3104.34 12279.38 00:33:39.463 =================================================================================================================== 00:33:39.463 Total : 4220.00 527.50 0.00 0.00 3788.91 3104.34 12279.38 00:33:39.463 0 00:33:39.463 16:32:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:39.463 16:32:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:39.463 16:32:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:39.463 16:32:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:39.463 | .driver_specific 00:33:39.463 | .nvme_error 00:33:39.463 | .status_code 00:33:39.463 | .command_transient_transport_error' 00:33:39.463 16:32:38 -- host/digest.sh@71 -- # (( 272 > 0 )) 00:33:39.463 16:32:38 -- host/digest.sh@73 -- # killprocess 3317329 00:33:39.463 16:32:38 -- common/autotest_common.sh@926 -- # '[' -z 3317329 ']' 00:33:39.463 16:32:38 -- common/autotest_common.sh@930 -- # kill -0 3317329 00:33:39.463 16:32:38 -- common/autotest_common.sh@931 -- # uname 00:33:39.463 16:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:39.463 16:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3317329 00:33:39.463 16:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:39.463 16:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:39.463 16:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3317329' 00:33:39.463 killing process with pid 3317329 00:33:39.463 16:32:38 -- common/autotest_common.sh@945 -- # kill 3317329 00:33:39.463 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.463 00:33:39.463 Latency(us) 00:33:39.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.463 =================================================================================================================== 00:33:39.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.463 16:32:38 -- common/autotest_common.sh@950 -- # wait 3317329 00:33:40.030 16:32:38 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:33:40.030 16:32:38 -- host/digest.sh@54 -- # local rw bs qd 00:33:40.030 16:32:38 -- host/digest.sh@56 -- # rw=randwrite 00:33:40.030 16:32:38 -- host/digest.sh@56 -- # bs=4096 00:33:40.030 16:32:38 -- host/digest.sh@56 -- # qd=128 00:33:40.030 16:32:38 -- host/digest.sh@58 -- # bperfpid=3318011 00:33:40.030 16:32:38 -- host/digest.sh@60 -- # waitforlisten 3318011 /var/tmp/bperf.sock 00:33:40.030 16:32:38 -- common/autotest_common.sh@819 -- # '[' -z 3318011 ']' 00:33:40.030 16:32:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.030 16:32:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:40.030 16:32:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:40.030 16:32:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:40.030 16:32:38 -- common/autotest_common.sh@10 -- # set +x 00:33:40.030 16:32:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:40.030 [2024-04-23 16:32:38.806536] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:40.030 [2024-04-23 16:32:38.806654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318011 ] 00:33:40.030 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.030 [2024-04-23 16:32:38.896319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.288 [2024-04-23 16:32:38.984482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.857 16:32:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:40.857 16:32:39 -- common/autotest_common.sh@852 -- # return 0 00:33:40.857 16:32:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:40.857 16:32:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:40.857 16:32:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:40.857 16:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:40.857 16:32:39 -- common/autotest_common.sh@10 -- # set +x 00:33:40.857 16:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:40.857 16:32:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:40.857 16:32:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.118 nvme0n1 00:33:41.118 16:32:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:41.118 16:32:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.118 16:32:39 -- common/autotest_common.sh@10 -- # set +x 00:33:41.118 16:32:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.118 16:32:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:41.118 16:32:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.380 Running I/O for 2 seconds... 
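The setup just traced condenses to the sequence sketched below. This is a reconstruction from the commands visible in this log, not the host/digest.sh source itself; $SPDK_DIR stands in for the /var/jenkins/workspace/dsa-phy-autotest/spdk path used in this run, and the comments describe only what the trace shows.

# bdevperf is started on its own RPC socket with the flags logged above
$SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# keep per-status NVMe error counters and retry failed I/O indefinitely
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# clear any previous crc32c error injection (issued on the default RPC socket in the
# trace), attach the controller with TCP data digest enabled, then re-arm crc32c
# corruption with the -i 256 parameter seen above
$SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK_DIR/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# run the workload, then read back how many commands completed with a transient
# transport error (the digest failures that follow below)
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

For the randread run that finished above, this bdev_get_iostat query returned 272, which is the value the (( 272 > 0 )) check in the trace asserts is non-zero.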
00:33:41.380 [2024-04-23 16:32:40.075310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:33:41.380 [2024-04-23 16:32:40.076233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.076277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.084696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:33:41.380 [2024-04-23 16:32:40.085687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.085722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.093869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.380 [2024-04-23 16:32:40.094601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.102865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.380 [2024-04-23 16:32:40.103766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.103790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.111758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.380 [2024-04-23 16:32:40.112946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.112983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.121907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.380 [2024-04-23 16:32:40.122917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.122949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:41.380 [2024-04-23 16:32:40.131126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.380 [2024-04-23 16:32:40.132042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.380 [2024-04-23 16:32:40.132067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.140025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.140944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.140970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.148895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.149817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.149840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.157732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.158664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.158688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.166546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.167488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.167516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.175379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.176332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.176355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.184211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.185171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.185195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.193176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.194144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.194166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.202006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.202984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.203006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.210845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.381 [2024-04-23 16:32:40.211829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.219670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:33:41.381 [2024-04-23 16:32:40.220670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.220693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.228488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e84c0 00:33:41.381 [2024-04-23 16:32:40.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.229511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.236288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:33:41.381 [2024-04-23 16:32:40.236998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.237020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.245110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.245834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.245855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.253931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.254662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:41.381 [2024-04-23 16:32:40.254684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.262753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.263499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.263520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.271572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.272324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.272346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.280383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.281146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.281168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.289203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.289968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.289989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.298026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.298804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.298826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:41.381 [2024-04-23 16:32:40.306839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.381 [2024-04-23 16:32:40.307626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.381 [2024-04-23 16:32:40.307652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.643 [2024-04-23 16:32:40.315654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.643 [2024-04-23 16:32:40.316447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:25166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.643 [2024-04-23 16:32:40.316474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.643 [2024-04-23 16:32:40.324488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.643 [2024-04-23 16:32:40.325295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.643 [2024-04-23 16:32:40.325318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.643 [2024-04-23 16:32:40.333319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.643 [2024-04-23 16:32:40.334137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.643 [2024-04-23 16:32:40.334160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.643 [2024-04-23 16:32:40.342143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.643 [2024-04-23 16:32:40.342967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.342989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.350979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.351834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.359795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.360637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.360659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.368636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.369485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.369507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.377464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.378323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.378345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.386297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.387165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.387187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.395146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:33:41.644 [2024-04-23 16:32:40.396028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.396050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.404019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:33:41.644 [2024-04-23 16:32:40.404634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.404656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.412852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:33:41.644 [2024-04-23 16:32:40.413430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.413452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.421695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:33:41.644 [2024-04-23 16:32:40.422208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.422230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.430509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0ff8 00:33:41.644 [2024-04-23 16:32:40.431075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.431098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.439339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 
00:33:41.644 [2024-04-23 16:32:40.440006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.440028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.448167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:33:41.644 [2024-04-23 16:32:40.449197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.449223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.456952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:33:41.644 [2024-04-23 16:32:40.457825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.457848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.465781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:33:41.644 [2024-04-23 16:32:40.466659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.466681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.474646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:33:41.644 [2024-04-23 16:32:40.475194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.475218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.483456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:33:41.644 [2024-04-23 16:32:40.483986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.484009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.492304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:41.644 [2024-04-23 16:32:40.493081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.493104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.501135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:41.644 [2024-04-23 16:32:40.502160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.502184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.509926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:41.644 [2024-04-23 16:32:40.510797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.510818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.518766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:41.644 [2024-04-23 16:32:40.519642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.519663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.527636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:33:41.644 [2024-04-23 16:32:40.528184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.528206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.536446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:33:41.644 [2024-04-23 16:32:40.537026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.537048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.545272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:33:41.644 [2024-04-23 16:32:40.545822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.545848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.554119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:33:41.644 [2024-04-23 16:32:40.554882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.554904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 
16:32:40.562920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:33:41.644 [2024-04-23 16:32:40.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.563782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.644 [2024-04-23 16:32:40.571760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:33:41.644 [2024-04-23 16:32:40.572595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.644 [2024-04-23 16:32:40.572618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.905 [2024-04-23 16:32:40.581602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:33:41.905 [2024-04-23 16:32:40.582498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.905 [2024-04-23 16:32:40.582520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.905 [2024-04-23 16:32:40.590987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:33:41.905 [2024-04-23 16:32:40.591560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.905 [2024-04-23 16:32:40.591582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.905 [2024-04-23 16:32:40.598754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:33:41.905 [2024-04-23 16:32:40.599829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.905 [2024-04-23 16:32:40.599851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.905 [2024-04-23 16:32:40.608654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:33:41.905 [2024-04-23 16:32:40.609232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.905 [2024-04-23 16:32:40.609255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.616399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:33:41.906 [2024-04-23 16:32:40.617355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.617376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.625269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:33:41.906 [2024-04-23 16:32:40.626100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.626122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.634068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:33:41.906 [2024-04-23 16:32:40.634952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.634974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.642864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:33:41.906 [2024-04-23 16:32:40.643810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.643832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.651009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:33:41.906 [2024-04-23 16:32:40.651744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.651766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.659953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e84c0 00:33:41.906 [2024-04-23 16:32:40.660064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.660085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.668866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8d30 00:33:41.906 [2024-04-23 16:32:40.669119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.669142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.677656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:41.906 [2024-04-23 16:32:40.677891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.677912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.687962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:33:41.906 [2024-04-23 16:32:40.689196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.689222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.696800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e99d8 00:33:41.906 [2024-04-23 16:32:40.698044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.698073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.706148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:33:41.906 [2024-04-23 16:32:40.707460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.707486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.713852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:33:41.906 [2024-04-23 16:32:40.714617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.714647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.722969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:33:41.906 [2024-04-23 16:32:40.723876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.723899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.731834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:33:41.906 [2024-04-23 16:32:40.732750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.732774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.740706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:33:41.906 [2024-04-23 16:32:40.741635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 
16:32:40.741661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.749200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:33:41.906 [2024-04-23 16:32:40.749507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.749535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.758159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:33:41.906 [2024-04-23 16:32:40.758610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.758637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.766970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.906 [2024-04-23 16:32:40.767395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.767418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.775969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:33:41.906 [2024-04-23 16:32:40.776454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.776479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.787287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:33:41.906 [2024-04-23 16:32:40.787716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.787742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.796986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:33:41.906 [2024-04-23 16:32:40.797371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.797395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.806794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:33:41.906 [2024-04-23 16:32:40.807160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13855 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.807183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.816479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:33:41.906 [2024-04-23 16:32:40.816831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.816858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.826374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:33:41.906 [2024-04-23 16:32:40.826713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.906 [2024-04-23 16:32:40.826738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:41.906 [2024-04-23 16:32:40.836976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:33:42.165 [2024-04-23 16:32:40.837291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.165 [2024-04-23 16:32:40.837315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:42.165 [2024-04-23 16:32:40.847246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:33:42.165 [2024-04-23 16:32:40.847599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.165 [2024-04-23 16:32:40.847622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:42.165 [2024-04-23 16:32:40.857652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:33:42.165 [2024-04-23 16:32:40.857983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.165 [2024-04-23 16:32:40.858005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:42.165 [2024-04-23 16:32:40.867134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:33:42.165 [2024-04-23 16:32:40.867395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.165 [2024-04-23 16:32:40.867418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:42.165 [2024-04-23 16:32:40.875972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:33:42.165 [2024-04-23 16:32:40.876277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.165 [2024-04-23 16:32:40.876300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:42.165 [2024-04-23 16:32:40.884810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe2e8 00:33:42.165 [2024-04-23 16:32:40.885124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.885146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.893511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.894397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.894420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.902318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.903056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.903078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.911211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.911957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.911979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.920054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.920808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.920830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.928916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.929700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.937761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 
16:32:40.938541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.938567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.946599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.947384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.947407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.955448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.956242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.956263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.964299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.965101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.965124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.973143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.973955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.973977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.981995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.982814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.982836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.990837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:40.991669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:40.991691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:40.999689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:41.000524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.000546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.008530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:33:42.166 [2024-04-23 16:32:41.009382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.009403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.019403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:33:42.166 [2024-04-23 16:32:41.020708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.020729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.026717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:33:42.166 [2024-04-23 16:32:41.027335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.027357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.036223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:33:42.166 [2024-04-23 16:32:41.037065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.037088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.044057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:33:42.166 [2024-04-23 16:32:41.044865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.044887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.052907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:33:42.166 [2024-04-23 16:32:41.053728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.053749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.061757] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:33:42.166 [2024-04-23 16:32:41.062581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.062602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.070597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:33:42.166 [2024-04-23 16:32:41.071440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.071464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.079660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:33:42.166 [2024-04-23 16:32:41.080516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.080539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.166 [2024-04-23 16:32:41.089254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:33:42.166 [2024-04-23 16:32:41.089599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.166 [2024-04-23 16:32:41.089625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.098207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:33:42.425 [2024-04-23 16:32:41.098918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.098940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.106957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:33:42.425 [2024-04-23 16:32:41.108106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.108133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.115800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:33:42.425 [2024-04-23 16:32:41.116792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 
m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.124673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:33:42.425 [2024-04-23 16:32:41.125685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.125707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.133528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:33:42.425 [2024-04-23 16:32:41.134539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.134561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.142712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6020 00:33:42.425 [2024-04-23 16:32:41.143827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.143850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.152596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:33:42.425 [2024-04-23 16:32:41.153845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.153869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.163495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:33:42.425 [2024-04-23 16:32:41.164848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.164872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.175014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:33:42.425 [2024-04-23 16:32:41.176389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.176414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.186400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:33:42.425 [2024-04-23 16:32:41.187733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.187757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.197353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:33:42.425 [2024-04-23 16:32:41.198649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.198673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.207618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:33:42.425 [2024-04-23 16:32:41.208798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.208821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.216659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:33:42.425 [2024-04-23 16:32:41.217501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.217523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.225495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:33:42.425 [2024-04-23 16:32:41.226574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.226597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.235004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:33:42.425 [2024-04-23 16:32:41.235670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.235693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.242284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb480 00:33:42.425 [2024-04-23 16:32:41.243074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.243096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.251255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:33:42.425 [2024-04-23 16:32:41.251764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.251787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.260181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.260849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.260872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.269003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.269680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.269701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.277819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.278503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.278524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.286648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.287340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.287361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.295463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.296164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.296186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.304291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.305006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.305027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.313107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.313828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:42.425 [2024-04-23 16:32:41.313849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.321929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.322661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.322683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.330745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.331491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.331512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.339562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.340314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.340335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:42.425 [2024-04-23 16:32:41.348386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.425 [2024-04-23 16:32:41.349151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.425 [2024-04-23 16:32:41.349172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.357207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.685 [2024-04-23 16:32:41.357979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.358001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.366029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.685 [2024-04-23 16:32:41.366809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.366831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.374835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.685 [2024-04-23 16:32:41.375620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:77 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.375646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.383655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.685 [2024-04-23 16:32:41.384459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.384481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.392461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:33:42.685 [2024-04-23 16:32:41.393262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.393283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.400907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:33:42.685 [2024-04-23 16:32:41.401079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.401101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.409977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:33:42.685 [2024-04-23 16:32:41.410623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.410650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.418822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.419485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.419505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.427643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.428311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.428333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.436462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.437139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.437161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.445281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.445972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.445995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.454098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.454794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.454815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.462915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.463620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.463646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.471734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.472449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.472471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.480544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.481268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.685 [2024-04-23 16:32:41.481293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.685 [2024-04-23 16:32:41.489358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.685 [2024-04-23 16:32:41.490095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.490116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.498179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.498925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.498948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.506994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.507749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.515813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.516571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.516593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.524645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.525417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.525439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.533476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.534273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.542301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.543089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.543111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.551117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.551915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.551937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.559945] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.560758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.560784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.568781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.569598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.569619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.577614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:42.686 [2024-04-23 16:32:41.578444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.578466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.585981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:33:42.686 [2024-04-23 16:32:41.586753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.586775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.594972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:33:42.686 [2024-04-23 16:32:41.595465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.595487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.603920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4f40 00:33:42.686 [2024-04-23 16:32:41.604571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.604593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.686 [2024-04-23 16:32:41.612771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.686 [2024-04-23 16:32:41.613431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.686 [2024-04-23 16:32:41.613451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:42.945 
[2024-04-23 16:32:41.621612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.945 [2024-04-23 16:32:41.622287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.945 [2024-04-23 16:32:41.622309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.945 [2024-04-23 16:32:41.630441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.631122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.631143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.639264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.639954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.639976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.648091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.648789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.648810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.656920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.657624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.657649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.665732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.666447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.666469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.674548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.675276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.675298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.683370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.684107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.684130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.692198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.692944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.692966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.701016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.701793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.709838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.710622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.718669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.719446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.719467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.727502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.728282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.728304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.736323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:33:42.946 [2024-04-23 16:32:41.737114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.737136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.744728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:33:42.946 [2024-04-23 16:32:41.745487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.745509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.753722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:33:42.946 [2024-04-23 16:32:41.754209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.754233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.762663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:33:42.946 [2024-04-23 16:32:41.763309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.763332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.771498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.772150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.772172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.780326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.780990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.781012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.789165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.789837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.789859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.797998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.798678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:42.946 [2024-04-23 16:32:41.798699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.806831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.807519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.815672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.816371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.816394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.824506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.825211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.825233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.833339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.834058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.834079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.842164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.842889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.842911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.850996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.851732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.851753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.859833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.860577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:17377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.860605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:42.946 [2024-04-23 16:32:41.868662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:42.946 [2024-04-23 16:32:41.869416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.946 [2024-04-23 16:32:41.869439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.207 [2024-04-23 16:32:41.877494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:43.207 [2024-04-23 16:32:41.878261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.207 [2024-04-23 16:32:41.878282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:43.207 [2024-04-23 16:32:41.886329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:43.207 [2024-04-23 16:32:41.887104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.207 [2024-04-23 16:32:41.887126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:43.207 [2024-04-23 16:32:41.895156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:33:43.207 [2024-04-23 16:32:41.895939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.207 [2024-04-23 16:32:41.895961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:43.207 [2024-04-23 16:32:41.903613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:33:43.207 [2024-04-23 16:32:41.903767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.207 [2024-04-23 16:32:41.903793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:43.207 [2024-04-23 16:32:41.912692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:33:43.207 [2024-04-23 16:32:41.913317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.207 [2024-04-23 16:32:41.913340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.921527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6020 00:33:43.208 [2024-04-23 16:32:41.922161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.922183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.930359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.931006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.931028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.939195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.939851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.939873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.948028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.948693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.948715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.956852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.957522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.957543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.965685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.966367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.966389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.974510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.975200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.975221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.983350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 
00:33:43.208 [2024-04-23 16:32:41.984055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.984078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:41.992238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:41.992952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:41.992974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.001073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:42.001794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:42.001816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.009908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:42.010642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:42.010667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.018739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:42.019474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:42.019495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.027578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:42.028333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:42.028354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.036432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:33:43.208 [2024-04-23 16:32:42.037188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:43.208 [2024-04-23 16:32:42.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:43.208 [2024-04-23 16:32:42.045262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195f20d8
00:33:43.208 [2024-04-23 16:32:42.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.208 [2024-04-23 16:32:42.046061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:43.208 [2024-04-23 16:32:42.054097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8
00:33:43.208 [2024-04-23 16:32:42.054873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:43.208 [2024-04-23 16:32:42.054895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:43.208
00:33:43.208 Latency(us)
00:33:43.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.208 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:43.208 nvme0n1 : 2.00 28341.26 110.71 0.00 0.00 4512.85 2207.53 14348.93
00:33:43.208 ===================================================================================================================
00:33:43.208 Total : 28341.26 110.71 0.00 0.00 4512.85 2207.53 14348.93
00:33:43.208 0
00:33:43.208 16:32:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:43.208 16:32:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:43.208 16:32:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:43.208 | .driver_specific
00:33:43.208 | .nvme_error
00:33:43.208 | .status_code
00:33:43.208 | .command_transient_transport_error'
00:33:43.208 16:32:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:43.468 16:32:42 -- host/digest.sh@71 -- # (( 222 > 0 ))
00:33:43.468 16:32:42 -- host/digest.sh@73 -- # killprocess 3318011
00:33:43.468 16:32:42 -- common/autotest_common.sh@926 -- # '[' -z 3318011 ']'
00:33:43.468 16:32:42 -- common/autotest_common.sh@930 -- # kill -0 3318011
00:33:43.468 16:32:42 -- common/autotest_common.sh@931 -- # uname
00:33:43.469 16:32:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:33:43.469 16:32:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3318011
00:33:43.469 16:32:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:33:43.469 16:32:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:33:43.469 16:32:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3318011'
killing process with pid 3318011
00:33:43.469 16:32:42 -- common/autotest_common.sh@945 -- # kill 3318011
00:33:43.469 Received shutdown signal, test time was about 2.000000 seconds
00:33:43.469
00:33:43.469 Latency(us)
00:33:43.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.469 ===================================================================================================================
00:33:43.469 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.469 16:32:42 -- common/autotest_common.sh@950 -- # wait 3318011
00:33:43.469 16:32:42 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:33:43.728 16:32:42 -- host/digest.sh@54 -- # local rw bs qd
00:33:43.728 16:32:42 -- host/digest.sh@56 -- # rw=randwrite
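Condensed from the xtrace output above (error counting, teardown) and the controller setup that follows, the check this digest test performs amounts to roughly the sketch below. The $spdk/$sock shorthands are introduced here only for readability; every RPC, flag, path and jq filter is taken from the trace itself, but this is a reconstruction for reference, not the literal digest.sh:

  # Sketch of the digest-error flow reconstructed from the trace (not verbatim digest.sh).
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk      # workspace path from this run
  rpc="$spdk/scripts/rpc.py"
  sock=/var/tmp/bperf.sock                               # bdevperf RPC socket used above

  # Collect per-bdev NVMe error counters and retry failed I/O indefinitely.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # The trace then arms CRC32C corruption on the target's accel layer
  # (accel_error_inject_error -o crc32c -t corrupt -i 32) and drives the workload:
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

  # Each corrupted data digest should surface as a TRANSIENT TRANSPORT ERROR (00/22);
  # the test only passes if the counter is non-zero (222 in the run above).
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))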
00:33:43.728 16:32:42 -- host/digest.sh@56 -- # bs=131072 00:33:43.728 16:32:42 -- host/digest.sh@56 -- # qd=16 00:33:43.728 16:32:42 -- host/digest.sh@58 -- # bperfpid=3318815 00:33:43.728 16:32:42 -- host/digest.sh@60 -- # waitforlisten 3318815 /var/tmp/bperf.sock 00:33:43.728 16:32:42 -- common/autotest_common.sh@819 -- # '[' -z 3318815 ']' 00:33:43.728 16:32:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:43.728 16:32:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.728 16:32:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:43.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:43.728 16:32:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.728 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:33:43.729 16:32:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:43.989 [2024-04-23 16:32:42.717322] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:33:43.989 [2024-04-23 16:32:42.717474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318815 ] 00:33:43.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:43.989 Zero copy mechanism will not be used. 00:33:43.989 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.989 [2024-04-23 16:32:42.851076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.249 [2024-04-23 16:32:42.939364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.508 16:32:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:44.508 16:32:43 -- common/autotest_common.sh@852 -- # return 0 00:33:44.508 16:32:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.508 16:32:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.766 16:32:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:44.767 16:32:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.767 16:32:43 -- common/autotest_common.sh@10 -- # set +x 00:33:44.767 16:32:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.767 16:32:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.767 16:32:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.025 nvme0n1 00:33:45.025 16:32:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:45.025 16:32:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:45.025 16:32:43 -- common/autotest_common.sh@10 -- # set +x 00:33:45.025 16:32:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:45.025 16:32:43 -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:45.025 16:32:43 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.283 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.283 Zero copy mechanism will not be used. 00:33:45.283 Running I/O for 2 seconds... 00:33:45.283 [2024-04-23 16:32:44.014265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.014900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.014945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.034960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.035436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.035472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.057504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.058112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.058147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.080102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.080718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.080751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.104095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.104655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.104688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.127610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.128290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.151932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 
00:33:45.283 [2024-04-23 16:32:44.152539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.152570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.175068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.175833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.175864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.283 [2024-04-23 16:32:44.198508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.283 [2024-04-23 16:32:44.199131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.283 [2024-04-23 16:32:44.199166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.543 [2024-04-23 16:32:44.218655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.543 [2024-04-23 16:32:44.218943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.218979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.230477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.230801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.230830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.242341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.242655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.242685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.255030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.255383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.255417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.268028] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.268290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.268318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.280514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.280769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.280795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.292369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.292747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.292776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.304497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.304820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.304850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.316932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.317230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.317262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.329313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.329612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.329653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.341938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.342340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.342369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:45.544 [2024-04-23 16:32:44.354463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.354846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.366239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.366651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.366680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.378744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.379162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.379192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.391378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.391680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.391729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.403987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.404392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.404428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.416364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.416694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.416725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.428910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.429212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.429241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.440789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.441001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.441028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.453859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.454149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.454177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.544 [2024-04-23 16:32:44.466162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.544 [2024-04-23 16:32:44.466433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.544 [2024-04-23 16:32:44.466468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.806 [2024-04-23 16:32:44.478396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.806 [2024-04-23 16:32:44.478686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-04-23 16:32:44.478719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.806 [2024-04-23 16:32:44.490853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.806 [2024-04-23 16:32:44.491123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-04-23 16:32:44.491152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.806 [2024-04-23 16:32:44.503564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.806 [2024-04-23 16:32:44.503978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-04-23 16:32:44.504012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.806 [2024-04-23 16:32:44.516049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.516363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 
16:32:44.516394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.529045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.529346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.529377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.541096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.541318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.552542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.552955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.552985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.564937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.565326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.565355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.577400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.577773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.577812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.589425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.589779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.601963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.602296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.613690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.613999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.614032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.625684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.626041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.626071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.638030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.638321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.638347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.650972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.651315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.651350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.663644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.664007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.664039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.676456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.676769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.676796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.688688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.688969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.688997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.701472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.701888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.701917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.713368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.713642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.713668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.807 [2024-04-23 16:32:44.725757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:45.807 [2024-04-23 16:32:44.726050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.807 [2024-04-23 16:32:44.726081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.069 [2024-04-23 16:32:44.738538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.738954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.738983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.751408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.751704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.751733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.763547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.763839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.763870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.776262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.776497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.776524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.788522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.788851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.788879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.800808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.801179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.801214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.812887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.813182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.813222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.825451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.825717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.825746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.837673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.838056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.838086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.849223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.849464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.849493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 
16:32:44.861794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.862182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.862210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.874490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.874707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.874736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.886622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.886874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.886901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.898783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.899137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.899167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.911493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.911890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.911920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.924439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.924724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.924750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.936751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.936998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.937028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.949101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.949477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.949509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.961850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.962203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.962238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.974833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.975211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.975241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.987241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.987589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.987617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.070 [2024-04-23 16:32:44.999250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.070 [2024-04-23 16:32:44.999494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.070 [2024-04-23 16:32:44.999523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.011195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.330 [2024-04-23 16:32:45.011443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.330 [2024-04-23 16:32:45.011469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.023397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.330 [2024-04-23 16:32:45.023735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.330 [2024-04-23 16:32:45.023768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.036388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.330 [2024-04-23 16:32:45.036661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.330 [2024-04-23 16:32:45.036688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.049068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.330 [2024-04-23 16:32:45.049520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.330 [2024-04-23 16:32:45.049555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.061691] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.330 [2024-04-23 16:32:45.061967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.330 [2024-04-23 16:32:45.061992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.330 [2024-04-23 16:32:45.074106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.074482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.074512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.086536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.086905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.086935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.098959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.099308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.099337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.111217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.111526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.111555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.123535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.123786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.123817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.136347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.136735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.136769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.148460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.148795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.148822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.161097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.161359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.161386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.173103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.173442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.173472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.185584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.185891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.185922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.198365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.198689] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.198716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.210880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.211145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.211179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.223307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.223641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.223684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.235969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.236269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.236298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.248248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.248585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.248616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.331 [2024-04-23 16:32:45.260559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.331 [2024-04-23 16:32:45.260865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.331 [2024-04-23 16:32:45.260898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.272831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.273120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.273151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.285698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 
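Each data_crc32_calc_done entry in this stream (it continues below) is the NVMe/TCP receive path in tcp.c rejecting a data PDU whose CRC32C data digest does not match the payload it carried; the digest-error test injects that corruption deliberately. The following is a minimal pure-Python sketch of the same check, assuming the digest simply covers the PDU data bytes; the in-tree implementation is table/instruction accelerated and lives in SPDK's util code, not here.

# Bit-by-bit CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the checksum
# NVMe/TCP uses for header and data digests. Slow but dependency-free sketch.

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    # Mirrors the pass/fail decision behind "Data digest error": recompute the
    # CRC32C over the received data and compare it with the digest field.
    return crc32c(payload) == received_digest

if __name__ == "__main__":
    assert crc32c(b"123456789") == 0xE3069283      # well-known CRC32C test vector
    good = crc32c(b"\x00" * 512)
    print(data_digest_ok(b"\x00" * 512, good))             # True
    print(data_digest_ok(b"\x00" * 511 + b"\x01", good))   # False -> digest error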
00:33:46.590 [2024-04-23 16:32:45.285967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.285997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.298873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.299344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.299376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.310579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.310788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.310814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.323150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.323591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.323626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.335335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.335581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.335610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.347333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.347726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.359346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.359688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.359717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.371326] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.371581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.371609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.382802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.383137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.383170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.394529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.394744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.394771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.404514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.404758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.404788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.414782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.415011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.415041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.424709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.425057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.425085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.435500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.435790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.435823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
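The matching spdk_nvme_print_completion lines report each failed WRITE with status (00/22): status code type 0x0 (generic) and status code 0x22, printed as COMMAND TRANSIENT TRANSPORT ERROR, with the phase, more and do-not-retry bits all 0. Below is a short sketch of decoding that 16-bit status half of completion dword 3; the field layout follows the NVMe base specification, and the helper name and sample value are illustrative only, not SPDK code.

# Decode the NVMe completion status (upper 16 bits of CQE dword 3).
# The sample value is constructed for illustration, not taken from this run.

def decode_nvme_status(status16: int) -> dict:
    return {
        "phase_tag":        status16 & 0x1,           # P
        "status_code":      (status16 >> 1) & 0xFF,   # SC
        "status_code_type": (status16 >> 9) & 0x7,    # SCT
        "retry_delay":      (status16 >> 12) & 0x3,   # CRD
        "more":             (status16 >> 14) & 0x1,   # M
        "do_not_retry":     (status16 >> 15) & 0x1,   # DNR
    }

if __name__ == "__main__":
    # SCT=0x0 (generic), SC=0x22 -> "(00/22)" in the log, i.e. the code the
    # driver prints as COMMAND TRANSIENT TRANSPORT ERROR; P, M, DNR all 0.
    status = (0x0 << 9) | (0x22 << 1)
    print(decode_nvme_status(status))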
00:33:46.590 [2024-04-23 16:32:45.445938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.446171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.456844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.457126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.457153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.467045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.467389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.477550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.477775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.477803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.487331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.487507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.497164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.497381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.497415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.507068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.507348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.507375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.590 [2024-04-23 16:32:45.518201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.590 [2024-04-23 16:32:45.518508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.590 [2024-04-23 16:32:45.518537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.528750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.529046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.529073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.539694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.539945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.539986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.549556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.549807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.549834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.559818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.560149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.560176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.570205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.570431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.570460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.580242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.580551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 
16:32:45.580580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.590575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.590797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.590829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.600751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.601033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.601060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.610900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.611013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.611041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.620887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.621171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.621199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.630497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.630728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.630767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.641167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.641435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.641464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.651387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.651703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.651731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.661860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.662123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.662152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.672244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.672493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.672519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.681586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.681859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.681892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.692462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.692737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.692765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.702196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.702509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.702541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.712763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.713057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.713085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.722543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.722817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.850 [2024-04-23 16:32:45.722846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.850 [2024-04-23 16:32:45.732483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.850 [2024-04-23 16:32:45.732736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.851 [2024-04-23 16:32:45.732767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.851 [2024-04-23 16:32:45.742427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.851 [2024-04-23 16:32:45.742704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.851 [2024-04-23 16:32:45.742733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.851 [2024-04-23 16:32:45.752696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.851 [2024-04-23 16:32:45.752892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.851 [2024-04-23 16:32:45.752921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.851 [2024-04-23 16:32:45.763049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.851 [2024-04-23 16:32:45.763230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.851 [2024-04-23 16:32:45.763259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.851 [2024-04-23 16:32:45.772682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:46.851 [2024-04-23 16:32:45.772887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.851 [2024-04-23 16:32:45.772918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.782783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.782975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.783001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.793067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.793318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.793345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.803870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.804100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.804134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.814425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.814637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.814666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.824839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.825070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.825099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.835044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.835348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.835380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.844620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.844806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.844833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.854563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.854912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.854940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 
16:32:45.865129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.865345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.875413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.875554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.875587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.886021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.886346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.886380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.896220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.896429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.896458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.906338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.906579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.906609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.916193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.916532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.916561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.926844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.927130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.927156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.936428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.936770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.936797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.946614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.946896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.956783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.956986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.957013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.967063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.967313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.967338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.110 [2024-04-23 16:32:45.977318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.110 [2024-04-23 16:32:45.977655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.110 [2024-04-23 16:32:45.977689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.111 [2024-04-23 16:32:45.987237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:33:47.111 [2024-04-23 16:32:45.987467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.111 [2024-04-23 16:32:45.987498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.111 00:33:47.111 Latency(us) 00:33:47.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.111 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:47.111 nvme0n1 : 2.00 2534.70 316.84 0.00 0.00 6302.11 3483.76 24144.84 00:33:47.111 
=================================================================================================================== 00:33:47.111 Total : 2534.70 316.84 0.00 0.00 6302.11 3483.76 24144.84 00:33:47.111 0 00:33:47.111 16:32:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:47.111 16:32:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:47.111 16:32:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:47.111 | .driver_specific 00:33:47.111 | .nvme_error 00:33:47.111 | .status_code 00:33:47.111 | .command_transient_transport_error' 00:33:47.111 16:32:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:47.369 16:32:46 -- host/digest.sh@71 -- # (( 163 > 0 )) 00:33:47.369 16:32:46 -- host/digest.sh@73 -- # killprocess 3318815 00:33:47.369 16:32:46 -- common/autotest_common.sh@926 -- # '[' -z 3318815 ']' 00:33:47.369 16:32:46 -- common/autotest_common.sh@930 -- # kill -0 3318815 00:33:47.369 16:32:46 -- common/autotest_common.sh@931 -- # uname 00:33:47.369 16:32:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:47.369 16:32:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3318815 00:33:47.369 16:32:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:47.369 16:32:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:47.369 16:32:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3318815' 00:33:47.369 killing process with pid 3318815 00:33:47.369 16:32:46 -- common/autotest_common.sh@945 -- # kill 3318815 00:33:47.369 Received shutdown signal, test time was about 2.000000 seconds 00:33:47.369 00:33:47.369 Latency(us) 00:33:47.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.369 =================================================================================================================== 00:33:47.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.370 16:32:46 -- common/autotest_common.sh@950 -- # wait 3318815 00:33:47.629 16:32:46 -- host/digest.sh@115 -- # killprocess 3316350 00:33:47.629 16:32:46 -- common/autotest_common.sh@926 -- # '[' -z 3316350 ']' 00:33:47.629 16:32:46 -- common/autotest_common.sh@930 -- # kill -0 3316350 00:33:47.629 16:32:46 -- common/autotest_common.sh@931 -- # uname 00:33:47.629 16:32:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:47.629 16:32:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3316350 00:33:47.890 16:32:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:47.890 16:32:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:47.890 16:32:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3316350' 00:33:47.890 killing process with pid 3316350 00:33:47.890 16:32:46 -- common/autotest_common.sh@945 -- # kill 3316350 00:33:47.890 16:32:46 -- common/autotest_common.sh@950 -- # wait 3316350 00:33:48.150 00:33:48.150 real 0m16.915s 00:33:48.150 user 0m32.383s 00:33:48.150 sys 0m3.305s 00:33:48.150 16:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.150 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:33:48.150 ************************************ 00:33:48.150 END TEST nvmf_digest_error 00:33:48.150 ************************************ 00:33:48.150 16:32:47 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:33:48.150 16:32:47 -- host/digest.sh@139 -- # nvmftestfini 00:33:48.150 
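For reference, get_transient_errcount asks the bperf RPC server for the bdev's NVMe error counters and pulls .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error out of the bdev_get_iostat reply with jq; the 163 it returned here is what makes the (( 163 > 0 )) check pass. The Latency table is consistent with the job line: 2534.70 IOPS at the job's 131072-byte IO size is 2534.70 × 131072 / 1048576 ≈ 316.84 MiB/s, matching the MiB/s column. A small sketch of the same counter extraction in Python, using the rpc.py path and socket shown in this log and assuming only the JSON shape implied by the jq filter:

# Query bdev_get_iostat over the bperf RPC socket and read the transient
# transport error counter, mirroring the jq filter used by host/digest.sh.
# The rpc.py path and socket name are the ones shown in this log; the JSON
# layout is inferred from the jq expression, not from the RPC documentation.
import json
import subprocess

RPC = "/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def get_transient_errcount(bdev: str) -> int:
    out = subprocess.check_output(
        [RPC, "-s", SOCK, "bdev_get_iostat", "-b", bdev], text=True)
    stat = json.loads(out)
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
                ["status_code"]["command_transient_transport_error"])

if __name__ == "__main__":
    count = get_transient_errcount("nvme0n1")
    print(count)        # this run reported 163
    assert count > 0    # same condition as the host/digest.sh@71 check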
16:32:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:48.150 16:32:47 -- nvmf/common.sh@116 -- # sync 00:33:48.150 16:32:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:48.150 16:32:47 -- nvmf/common.sh@119 -- # set +e 00:33:48.150 16:32:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:48.150 16:32:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:48.150 rmmod nvme_tcp 00:33:48.412 rmmod nvme_fabrics 00:33:48.412 rmmod nvme_keyring 00:33:48.412 16:32:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:48.412 16:32:47 -- nvmf/common.sh@123 -- # set -e 00:33:48.412 16:32:47 -- nvmf/common.sh@124 -- # return 0 00:33:48.412 16:32:47 -- nvmf/common.sh@477 -- # '[' -n 3316350 ']' 00:33:48.412 16:32:47 -- nvmf/common.sh@478 -- # killprocess 3316350 00:33:48.412 16:32:47 -- common/autotest_common.sh@926 -- # '[' -z 3316350 ']' 00:33:48.412 16:32:47 -- common/autotest_common.sh@930 -- # kill -0 3316350 00:33:48.412 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3316350) - No such process 00:33:48.412 16:32:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3316350 is not found' 00:33:48.412 Process with pid 3316350 is not found 00:33:48.412 16:32:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:48.412 16:32:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:48.412 16:32:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:48.412 16:32:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:48.412 16:32:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:48.412 16:32:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.412 16:32:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.412 16:32:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.316 16:32:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:50.316 00:33:50.316 real 1m6.624s 00:33:50.317 user 1m36.122s 00:33:50.317 sys 0m10.950s 00:33:50.317 16:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:50.317 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 ************************************ 00:33:50.317 END TEST nvmf_digest 00:33:50.317 ************************************ 00:33:50.317 16:32:49 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:33:50.317 16:32:49 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:33:50.317 16:32:49 -- nvmf/nvmf.sh@119 -- # [[ phy-fallback == phy ]] 00:33:50.317 16:32:49 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:50.317 16:32:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:50.317 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.578 16:32:49 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:50.578 00:33:50.578 real 21m10.851s 00:33:50.578 user 58m7.397s 00:33:50.578 sys 4m53.644s 00:33:50.578 16:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:50.578 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.578 ************************************ 00:33:50.578 END TEST nvmf_tcp 00:33:50.578 ************************************ 00:33:50.578 16:32:49 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:50.578 16:32:49 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:50.578 16:32:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:50.578 16:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:50.578 16:32:49 -- 
common/autotest_common.sh@10 -- # set +x 00:33:50.578 ************************************ 00:33:50.578 START TEST spdkcli_nvmf_tcp 00:33:50.578 ************************************ 00:33:50.578 16:32:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:50.578 * Looking for test storage... 00:33:50.578 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:33:50.578 16:32:49 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:50.578 16:32:49 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.578 16:32:49 -- nvmf/common.sh@7 -- # uname -s 00:33:50.578 16:32:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.578 16:32:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.578 16:32:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.578 16:32:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.578 16:32:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.578 16:32:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.578 16:32:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.578 16:32:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.578 16:32:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.578 16:32:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.578 16:32:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:50.578 16:32:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:50.578 16:32:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.578 16:32:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.578 16:32:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:50.578 16:32:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:50.578 16:32:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.578 16:32:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.578 16:32:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.578 16:32:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.578 16:32:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.578 16:32:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.578 16:32:49 -- paths/export.sh@5 -- # export PATH 00:33:50.578 16:32:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.578 16:32:49 -- nvmf/common.sh@46 -- # : 0 00:33:50.578 16:32:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:50.578 16:32:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:50.578 16:32:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:50.578 16:32:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.578 16:32:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.578 16:32:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:50.578 16:32:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:50.578 16:32:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:50.578 16:32:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:50.578 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.578 16:32:49 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:50.578 16:32:49 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3320253 00:33:50.578 16:32:49 -- spdkcli/common.sh@34 -- # waitforlisten 3320253 00:33:50.578 16:32:49 -- common/autotest_common.sh@819 -- # '[' -z 3320253 ']' 00:33:50.578 16:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.578 16:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:50.578 16:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.578 16:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:50.578 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.578 16:32:49 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:50.578 [2024-04-23 16:32:49.490000] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
00:33:50.579 [2024-04-23 16:32:49.490156] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320253 ] 00:33:50.838 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.838 [2024-04-23 16:32:49.624924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:50.838 [2024-04-23 16:32:49.721289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:50.838 [2024-04-23 16:32:49.721548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.838 [2024-04-23 16:32:49.721548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.405 16:32:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:51.405 16:32:50 -- common/autotest_common.sh@852 -- # return 0 00:33:51.405 16:32:50 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:51.405 16:32:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:51.405 16:32:50 -- common/autotest_common.sh@10 -- # set +x 00:33:51.405 16:32:50 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:51.405 16:32:50 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:51.405 16:32:50 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:51.405 16:32:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:51.405 16:32:50 -- common/autotest_common.sh@10 -- # set +x 00:33:51.405 16:32:50 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:51.405 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:51.405 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:51.405 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:51.405 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:51.405 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:51.405 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:51.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.405 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:51.405 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:51.405 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:51.405 ' 00:33:51.664 [2024-04-23 16:32:50.516262] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:54.201 [2024-04-23 16:32:52.571720] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.161 [2024-04-23 16:32:53.733626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:57.072 [2024-04-23 16:32:55.864704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:58.977 [2024-04-23 16:32:57.695434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:00.384 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:00.384 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:00.384 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.384 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:00.384 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:00.384 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.384 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.385 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:00.385 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:00.385 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:00.385 16:32:59 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:00.385 16:32:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:00.385 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:34:00.385 16:32:59 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:00.385 16:32:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:00.385 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:34:00.385 16:32:59 -- spdkcli/nvmf.sh@69 -- # check_match 00:34:00.385 16:32:59 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:00.659 16:32:59 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:00.659 16:32:59 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:00.918 16:32:59 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:00.918 16:32:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:00.918 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:34:00.918 16:32:59 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:00.918 
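Note: the spdkcli_job.py pass above walks the spdkcli hierarchy (/bdevs/malloc, /nvmf/transport, /nvmf/subsystem/...) to build six malloc bdevs, a TCP transport capped at 4 I/O qpairs per controller, and three subsystems with namespaces, listeners and allowed hosts; check_match then diffs the output of spdkcli.py ll /nvmf against spdkcli_nvmf.test.match. For reference, roughly the same configuration for one subsystem can be produced with plain rpc.py calls; the lines below are a hedged sketch, not part of the test run.

    # Approximate rpc.py equivalent of the spdkcli commands above (sketch only).
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260 -f ipv4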
16:32:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:00.918 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:34:00.918 16:32:59 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:00.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:00.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:00.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:00.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:00.918 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:00.918 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:00.918 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:00.918 ' 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:06.201 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:06.201 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:06.201 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:06.201 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:06.201 16:33:04 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:06.202 16:33:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:06.202 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:34:06.202 16:33:04 -- spdkcli/nvmf.sh@90 -- # killprocess 3320253 00:34:06.202 16:33:04 -- common/autotest_common.sh@926 -- # '[' -z 3320253 ']' 00:34:06.202 16:33:04 -- common/autotest_common.sh@930 -- # kill -0 3320253 
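Note: the clear_nvmf_config pass above undoes the setup in reverse order, removing namespaces and hosts first, then listeners, then the subsystems themselves, and finally the malloc bdevs, exercising both the targeted delete and the delete_all form of each command. A rough rpc.py equivalent for one object of each kind (sketch, reusing the names from the create pass):

    # Tear down in reverse order of creation (sketch only).
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
    ./scripts/rpc.py bdev_malloc_delete Malloc1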
00:34:06.202 16:33:04 -- common/autotest_common.sh@931 -- # uname 00:34:06.202 16:33:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:06.202 16:33:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3320253 00:34:06.202 16:33:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:06.202 16:33:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:06.202 16:33:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3320253' 00:34:06.202 killing process with pid 3320253 00:34:06.202 16:33:04 -- common/autotest_common.sh@945 -- # kill 3320253 00:34:06.202 [2024-04-23 16:33:04.699827] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:06.202 16:33:04 -- common/autotest_common.sh@950 -- # wait 3320253 00:34:06.463 16:33:05 -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:06.463 16:33:05 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:06.463 16:33:05 -- spdkcli/common.sh@13 -- # '[' -n 3320253 ']' 00:34:06.463 16:33:05 -- spdkcli/common.sh@14 -- # killprocess 3320253 00:34:06.463 16:33:05 -- common/autotest_common.sh@926 -- # '[' -z 3320253 ']' 00:34:06.463 16:33:05 -- common/autotest_common.sh@930 -- # kill -0 3320253 00:34:06.463 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3320253) - No such process 00:34:06.463 16:33:05 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3320253 is not found' 00:34:06.463 Process with pid 3320253 is not found 00:34:06.463 16:33:05 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:06.463 16:33:05 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:06.463 16:33:05 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:06.463 00:34:06.463 real 0m15.859s 00:34:06.463 user 0m31.978s 00:34:06.463 sys 0m0.773s 00:34:06.463 16:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.463 16:33:05 -- common/autotest_common.sh@10 -- # set +x 00:34:06.463 ************************************ 00:34:06.463 END TEST spdkcli_nvmf_tcp 00:34:06.463 ************************************ 00:34:06.463 16:33:05 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:06.464 16:33:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:06.464 16:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:06.464 16:33:05 -- common/autotest_common.sh@10 -- # set +x 00:34:06.464 ************************************ 00:34:06.464 START TEST nvmf_identify_passthru 00:34:06.464 ************************************ 00:34:06.464 16:33:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:06.464 * Looking for test storage... 
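Note: killprocess above follows the usual autotest pattern: confirm the pid is still alive with kill -0, check the process name with ps (reactor_0 here, to decide whether sudo is needed), send the signal and wait for it, then let the EXIT trap's second invocation report "No such process" without failing the test. A stand-alone sketch of that idiom (not the autotest_common.sh implementation itself):

    # Stop a target pid and tolerate cleanup running twice (sketch).
    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }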
00:34:06.464 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:34:06.464 16:33:05 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.464 16:33:05 -- nvmf/common.sh@7 -- # uname -s 00:34:06.464 16:33:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.464 16:33:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.464 16:33:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.464 16:33:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.464 16:33:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.464 16:33:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.464 16:33:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.464 16:33:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.464 16:33:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.464 16:33:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.464 16:33:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:06.464 16:33:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:06.464 16:33:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.464 16:33:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.464 16:33:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:06.464 16:33:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:06.464 16:33:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.464 16:33:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.464 16:33:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.464 16:33:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@5 -- # export PATH 00:34:06.464 16:33:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- nvmf/common.sh@46 -- # : 0 00:34:06.464 16:33:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:06.464 16:33:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:06.464 16:33:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:06.464 16:33:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.464 16:33:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.464 16:33:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:06.464 16:33:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:06.464 16:33:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:06.464 16:33:05 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:06.464 16:33:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.464 16:33:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.464 16:33:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.464 16:33:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- paths/export.sh@5 -- # export PATH 00:34:06.464 16:33:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.464 16:33:05 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:34:06.464 16:33:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:06.464 16:33:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.464 16:33:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:06.464 16:33:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:06.464 16:33:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:06.464 16:33:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.464 16:33:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.464 16:33:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.464 16:33:05 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:34:06.464 16:33:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:06.464 16:33:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:06.464 16:33:05 -- common/autotest_common.sh@10 -- # set +x 00:34:11.743 16:33:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:11.743 16:33:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:11.743 16:33:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:11.743 16:33:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:11.743 16:33:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:11.743 16:33:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:11.743 16:33:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:11.743 16:33:10 -- nvmf/common.sh@294 -- # net_devs=() 00:34:11.743 16:33:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:11.743 16:33:10 -- nvmf/common.sh@295 -- # e810=() 00:34:11.743 16:33:10 -- nvmf/common.sh@295 -- # local -ga e810 00:34:11.743 16:33:10 -- nvmf/common.sh@296 -- # x722=() 00:34:11.743 16:33:10 -- nvmf/common.sh@296 -- # local -ga x722 00:34:11.743 16:33:10 -- nvmf/common.sh@297 -- # mlx=() 00:34:11.743 16:33:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:11.743 16:33:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.743 16:33:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:11.743 16:33:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:11.743 16:33:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:11.743 Found 0000:27:00.0 (0x8086 - 
0x159b) 00:34:11.743 16:33:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:11.743 16:33:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:11.743 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:11.743 16:33:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:11.743 16:33:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.743 16:33:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.743 16:33:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:11.743 Found net devices under 0000:27:00.0: cvl_0_0 00:34:11.743 16:33:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.743 16:33:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:11.743 16:33:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.743 16:33:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.743 16:33:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:11.743 Found net devices under 0000:27:00.1: cvl_0_1 00:34:11.743 16:33:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.743 16:33:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:11.743 16:33:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:11.743 16:33:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:11.743 16:33:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.743 16:33:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.743 16:33:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.743 16:33:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:11.743 16:33:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.743 16:33:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.743 16:33:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:11.743 16:33:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.743 16:33:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.743 16:33:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:11.743 16:33:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:11.743 16:33:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.743 16:33:10 -- nvmf/common.sh@250 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.743 16:33:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.743 16:33:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.743 16:33:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:11.743 16:33:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.003 16:33:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.003 16:33:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.003 16:33:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:12.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:34:12.003 00:34:12.003 --- 10.0.0.2 ping statistics --- 00:34:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.003 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:34:12.003 16:33:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:12.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:34:12.003 00:34:12.003 --- 10.0.0.1 ping statistics --- 00:34:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.003 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:34:12.003 16:33:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.003 16:33:10 -- nvmf/common.sh@410 -- # return 0 00:34:12.003 16:33:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:34:12.003 16:33:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.003 16:33:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:12.003 16:33:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:12.003 16:33:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.003 16:33:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:12.003 16:33:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:12.003 16:33:10 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:12.003 16:33:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:12.003 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:34:12.003 16:33:10 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:12.003 16:33:10 -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:12.003 16:33:10 -- common/autotest_common.sh@1509 -- # local bdfs 00:34:12.003 16:33:10 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:12.003 16:33:10 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:12.003 16:33:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:12.003 16:33:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:12.003 16:33:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:12.003 16:33:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:12.003 16:33:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:12.264 16:33:10 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:34:12.264 16:33:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:34:12.264 16:33:10 -- common/autotest_common.sh@1512 -- # echo 0000:03:00.0 00:34:12.264 16:33:10 -- target/identify_passthru.sh@16 -- # bdf=0000:03:00.0 00:34:12.264 16:33:10 -- 
target/identify_passthru.sh@17 -- # '[' -z 0000:03:00.0 ']' 00:34:12.264 16:33:10 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:34:12.264 16:33:10 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:12.264 16:33:10 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:12.264 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.645 16:33:12 -- target/identify_passthru.sh@23 -- # nvme_serial_number=233442AA2262 00:34:13.645 16:33:12 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:34:13.645 16:33:12 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:13.645 16:33:12 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:13.645 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.022 16:33:13 -- target/identify_passthru.sh@24 -- # nvme_model_number=Micron_7450_MTFDKBA960TFR 00:34:15.022 16:33:13 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:15.022 16:33:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:15.022 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:34:15.022 16:33:13 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:15.022 16:33:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:15.022 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:34:15.022 16:33:13 -- target/identify_passthru.sh@31 -- # nvmfpid=3328014 00:34:15.022 16:33:13 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:15.022 16:33:13 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:15.022 16:33:13 -- target/identify_passthru.sh@35 -- # waitforlisten 3328014 00:34:15.022 16:33:13 -- common/autotest_common.sh@819 -- # '[' -z 3328014 ']' 00:34:15.022 16:33:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.022 16:33:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:15.022 16:33:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.022 16:33:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:15.022 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:34:15.022 [2024-04-23 16:33:13.606638] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:34:15.022 [2024-04-23 16:33:13.606709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.022 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.022 [2024-04-23 16:33:13.695154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.022 [2024-04-23 16:33:13.793360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:15.022 [2024-04-23 16:33:13.793533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
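Note: identify_passthru first prepares the fabric and the baseline identity: nvmftestinit moves one cvl port into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (the initiator side keeps 10.0.0.1/24), opens TCP port 4420 in iptables and pings in both directions; get_first_nvme_bdf then picks 0000:03:00.0 and spdk_nvme_identify records the drive's Serial and Model over PCIe before the passthru target is exercised inside the namespace. A condensed sketch of that identity capture, with paths as used in this workspace:

    # Pick the first local NVMe BDF and record its identity over PCIe (sketch).
    bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "bdf=$bdf serial=$serial model=$model"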
00:34:15.022 [2024-04-23 16:33:13.793547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.022 [2024-04-23 16:33:13.793556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.022 [2024-04-23 16:33:13.793716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.022 [2024-04-23 16:33:13.793754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.022 [2024-04-23 16:33:13.793854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.022 [2024-04-23 16:33:13.793864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.594 16:33:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:15.594 16:33:14 -- common/autotest_common.sh@852 -- # return 0 00:34:15.594 16:33:14 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:15.594 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.594 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.594 INFO: Log level set to 20 00:34:15.594 INFO: Requests: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "method": "nvmf_set_config", 00:34:15.594 "id": 1, 00:34:15.594 "params": { 00:34:15.594 "admin_cmd_passthru": { 00:34:15.594 "identify_ctrlr": true 00:34:15.594 } 00:34:15.594 } 00:34:15.594 } 00:34:15.594 00:34:15.594 INFO: response: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "id": 1, 00:34:15.594 "result": true 00:34:15.594 } 00:34:15.594 00:34:15.594 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:15.594 16:33:14 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:15.594 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.594 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.594 INFO: Setting log level to 20 00:34:15.594 INFO: Setting log level to 20 00:34:15.594 INFO: Log level set to 20 00:34:15.594 INFO: Log level set to 20 00:34:15.594 INFO: Requests: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "method": "framework_start_init", 00:34:15.594 "id": 1 00:34:15.594 } 00:34:15.594 00:34:15.594 INFO: Requests: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "method": "framework_start_init", 00:34:15.594 "id": 1 00:34:15.594 } 00:34:15.594 00:34:15.594 [2024-04-23 16:33:14.501236] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:15.594 INFO: response: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "id": 1, 00:34:15.594 "result": true 00:34:15.594 } 00:34:15.594 00:34:15.594 INFO: response: 00:34:15.594 { 00:34:15.594 "jsonrpc": "2.0", 00:34:15.594 "id": 1, 00:34:15.594 "result": true 00:34:15.594 } 00:34:15.594 00:34:15.594 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:15.594 16:33:14 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.594 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.594 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.594 INFO: Setting log level to 40 00:34:15.594 INFO: Setting log level to 40 00:34:15.594 INFO: Setting log level to 40 00:34:15.594 [2024-04-23 16:33:14.515702] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.594 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:15.594 16:33:14 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:34:15.594 16:33:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:15.594 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:15.853 16:33:14 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 00:34:15.853 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:15.853 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:16.111 Nvme0n1 00:34:16.111 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.111 16:33:14 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:16.111 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.111 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:16.111 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.111 16:33:14 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:16.111 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.111 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:16.111 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.111 16:33:14 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.111 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.111 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:16.111 [2024-04-23 16:33:14.962600] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.111 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.111 16:33:14 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:16.111 16:33:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.111 16:33:14 -- common/autotest_common.sh@10 -- # set +x 00:34:16.111 [2024-04-23 16:33:14.970254] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:16.111 [ 00:34:16.111 { 00:34:16.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:16.111 "subtype": "Discovery", 00:34:16.111 "listen_addresses": [], 00:34:16.111 "allow_any_host": true, 00:34:16.111 "hosts": [] 00:34:16.112 }, 00:34:16.112 { 00:34:16.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.112 "subtype": "NVMe", 00:34:16.112 "listen_addresses": [ 00:34:16.112 { 00:34:16.112 "transport": "TCP", 00:34:16.112 "trtype": "TCP", 00:34:16.112 "adrfam": "IPv4", 00:34:16.112 "traddr": "10.0.0.2", 00:34:16.112 "trsvcid": "4420" 00:34:16.112 } 00:34:16.112 ], 00:34:16.112 "allow_any_host": true, 00:34:16.112 "hosts": [], 00:34:16.112 "serial_number": "SPDK00000000000001", 00:34:16.112 "model_number": "SPDK bdev Controller", 00:34:16.112 "max_namespaces": 1, 00:34:16.112 "min_cntlid": 1, 00:34:16.112 "max_cntlid": 65519, 00:34:16.112 "namespaces": [ 00:34:16.112 { 00:34:16.112 "nsid": 1, 00:34:16.112 "bdev_name": "Nvme0n1", 00:34:16.112 "name": "Nvme0n1", 00:34:16.112 "nguid": "000000000000000100A0752342AA2262", 00:34:16.112 "uuid": "00000000-0000-0001-00a0-752342aa2262" 00:34:16.112 } 00:34:16.112 ] 00:34:16.112 } 00:34:16.112 ] 00:34:16.112 16:33:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.112 16:33:14 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:16.112 16:33:14 -- target/identify_passthru.sh@54 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:16.112 16:33:14 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:16.370 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.370 16:33:15 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=233442AA2262 00:34:16.370 16:33:15 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:16.370 16:33:15 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:16.370 16:33:15 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:16.370 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.629 16:33:15 -- target/identify_passthru.sh@61 -- # nvmf_model_number=Micron_7450_MTFDKBA960TFR 00:34:16.629 16:33:15 -- target/identify_passthru.sh@63 -- # '[' 233442AA2262 '!=' 233442AA2262 ']' 00:34:16.629 16:33:15 -- target/identify_passthru.sh@68 -- # '[' Micron_7450_MTFDKBA960TFR '!=' Micron_7450_MTFDKBA960TFR ']' 00:34:16.629 16:33:15 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.629 16:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.629 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:34:16.629 16:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.629 16:33:15 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:16.629 16:33:15 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:16.629 16:33:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:16.629 16:33:15 -- nvmf/common.sh@116 -- # sync 00:34:16.629 16:33:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:16.629 16:33:15 -- nvmf/common.sh@119 -- # set +e 00:34:16.629 16:33:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:16.629 16:33:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:16.629 rmmod nvme_tcp 00:34:16.629 rmmod nvme_fabrics 00:34:16.629 rmmod nvme_keyring 00:34:16.629 16:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:16.629 16:33:15 -- nvmf/common.sh@123 -- # set -e 00:34:16.629 16:33:15 -- nvmf/common.sh@124 -- # return 0 00:34:16.629 16:33:15 -- nvmf/common.sh@477 -- # '[' -n 3328014 ']' 00:34:16.629 16:33:15 -- nvmf/common.sh@478 -- # killprocess 3328014 00:34:16.629 16:33:15 -- common/autotest_common.sh@926 -- # '[' -z 3328014 ']' 00:34:16.629 16:33:15 -- common/autotest_common.sh@930 -- # kill -0 3328014 00:34:16.629 16:33:15 -- common/autotest_common.sh@931 -- # uname 00:34:16.629 16:33:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:16.629 16:33:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3328014 00:34:16.629 16:33:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:16.629 16:33:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:16.629 16:33:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3328014' 00:34:16.629 killing process with pid 3328014 00:34:16.629 16:33:15 -- common/autotest_common.sh@945 -- # kill 3328014 00:34:16.629 [2024-04-23 16:33:15.429823] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:16.629 16:33:15 -- common/autotest_common.sh@950 -- # wait 3328014 
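Note: because nvmf_set_config --passthru-identify-ctrlr was applied before framework_start_init, the custom identify handler forwards Identify Controller data from the underlying drive, so the Serial (233442AA2262) and Model (Micron_7450_MTFDKBA960TFR) read back over NVMe/TCP match the PCIe values and the two '!=' checks above do not trigger a mismatch. A condensed sketch of the export-and-verify sequence using the same RPCs the test issues (addresses and names as in this run):

    # Export a local NVMe controller over NVMe/TCP with identify passthru (sketch).
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Read the identity back over the fabric and compare with the PCIe values.
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep -E 'Serial Number:|Model Number:'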
00:34:18.007 16:33:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:18.007 16:33:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:18.007 16:33:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:18.007 16:33:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.007 16:33:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:18.007 16:33:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.007 16:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:18.007 16:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.041 16:33:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:20.041 00:34:20.041 real 0m13.488s 00:34:20.041 user 0m14.016s 00:34:20.041 sys 0m5.044s 00:34:20.041 16:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:20.041 16:33:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.041 ************************************ 00:34:20.041 END TEST nvmf_identify_passthru 00:34:20.041 ************************************ 00:34:20.041 16:33:18 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:20.042 16:33:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:20.042 16:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:20.042 16:33:18 -- common/autotest_common.sh@10 -- # set +x 00:34:20.042 ************************************ 00:34:20.042 START TEST nvmf_dif 00:34:20.042 ************************************ 00:34:20.042 16:33:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:20.042 * Looking for test storage... 00:34:20.042 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:34:20.042 16:33:18 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.042 16:33:18 -- nvmf/common.sh@7 -- # uname -s 00:34:20.042 16:33:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.042 16:33:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.042 16:33:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.042 16:33:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.042 16:33:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.042 16:33:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.042 16:33:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.042 16:33:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.042 16:33:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.042 16:33:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.042 16:33:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:20.042 16:33:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:20.042 16:33:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.042 16:33:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.042 16:33:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:20.042 16:33:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:20.042 16:33:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.042 16:33:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.042 16:33:18 -- scripts/common.sh@442 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.042 16:33:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.042 16:33:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.042 16:33:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.042 16:33:18 -- paths/export.sh@5 -- # export PATH 00:34:20.042 16:33:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.042 16:33:18 -- nvmf/common.sh@46 -- # : 0 00:34:20.042 16:33:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:20.042 16:33:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:20.042 16:33:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:20.042 16:33:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.042 16:33:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.042 16:33:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:20.042 16:33:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:20.042 16:33:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:20.042 16:33:18 -- target/dif.sh@15 -- # NULL_META=16 00:34:20.042 16:33:18 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:20.042 16:33:18 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:20.042 16:33:18 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:20.042 16:33:18 -- target/dif.sh@135 -- # nvmftestinit 00:34:20.042 16:33:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:20.042 16:33:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.042 16:33:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:20.042 16:33:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:20.042 16:33:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:20.042 16:33:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.042 16:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:20.042 16:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.042 16:33:18 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:34:20.042 16:33:18 -- nvmf/common.sh@402 -- # 
gather_supported_nvmf_pci_devs 00:34:20.042 16:33:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:20.042 16:33:18 -- common/autotest_common.sh@10 -- # set +x 00:34:25.319 16:33:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:25.319 16:33:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:25.319 16:33:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:25.319 16:33:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:25.319 16:33:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:25.319 16:33:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:25.319 16:33:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:25.319 16:33:24 -- nvmf/common.sh@294 -- # net_devs=() 00:34:25.319 16:33:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:25.319 16:33:24 -- nvmf/common.sh@295 -- # e810=() 00:34:25.319 16:33:24 -- nvmf/common.sh@295 -- # local -ga e810 00:34:25.319 16:33:24 -- nvmf/common.sh@296 -- # x722=() 00:34:25.319 16:33:24 -- nvmf/common.sh@296 -- # local -ga x722 00:34:25.319 16:33:24 -- nvmf/common.sh@297 -- # mlx=() 00:34:25.319 16:33:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:25.319 16:33:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.319 16:33:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:25.319 16:33:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:25.319 16:33:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:25.319 16:33:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:25.319 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:25.319 16:33:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:25.319 16:33:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:25.319 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:25.319 16:33:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@349 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:25.319 16:33:24 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:25.319 16:33:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.319 16:33:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:25.319 16:33:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.319 16:33:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:25.319 Found net devices under 0000:27:00.0: cvl_0_0 00:34:25.319 16:33:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.319 16:33:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:25.319 16:33:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.319 16:33:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:25.319 16:33:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.319 16:33:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:25.319 Found net devices under 0000:27:00.1: cvl_0_1 00:34:25.319 16:33:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.319 16:33:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:25.319 16:33:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:25.319 16:33:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:25.319 16:33:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:25.320 16:33:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:25.320 16:33:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.320 16:33:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.320 16:33:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.320 16:33:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:25.320 16:33:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.320 16:33:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.320 16:33:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:25.320 16:33:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.320 16:33:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.320 16:33:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:25.320 16:33:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:25.320 16:33:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.320 16:33:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.579 16:33:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.579 16:33:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.579 16:33:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:25.579 16:33:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.579 16:33:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.838 16:33:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.838 16:33:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:25.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:25.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:34:25.838 00:34:25.838 --- 10.0.0.2 ping statistics --- 00:34:25.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.838 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:25.839 16:33:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:25.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.482 ms 00:34:25.839 00:34:25.839 --- 10.0.0.1 ping statistics --- 00:34:25.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.839 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:34:25.839 16:33:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.839 16:33:24 -- nvmf/common.sh@410 -- # return 0 00:34:25.839 16:33:24 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:25.839 16:33:24 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:34:28.371 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:34:28.371 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:34:28.371 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:34:28.371 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:34:28.371 16:33:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.371 16:33:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:28.371 16:33:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:28.371 16:33:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.371 16:33:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:28.371 16:33:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:28.371 16:33:27 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:28.371 16:33:27 -- target/dif.sh@137 -- # nvmfappstart 00:34:28.371 16:33:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:28.371 16:33:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:28.371 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:34:28.371 16:33:27 -- nvmf/common.sh@469 -- # nvmfpid=3333881 00:34:28.371 16:33:27 -- nvmf/common.sh@470 -- # waitforlisten 3333881 00:34:28.371 16:33:27 -- common/autotest_common.sh@819 -- # '[' -z 3333881 ']' 00:34:28.371 16:33:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.371 16:33:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:28.371 
16:33:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.371 16:33:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:28.371 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:34:28.371 16:33:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:28.371 [2024-04-23 16:33:27.272477] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 00:34:28.371 [2024-04-23 16:33:27.272582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.630 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.630 [2024-04-23 16:33:27.397527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.630 [2024-04-23 16:33:27.494691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:28.630 [2024-04-23 16:33:27.494877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.630 [2024-04-23 16:33:27.494892] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.630 [2024-04-23 16:33:27.494903] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.630 [2024-04-23 16:33:27.494931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.199 16:33:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:29.199 16:33:27 -- common/autotest_common.sh@852 -- # return 0 00:34:29.199 16:33:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:29.199 16:33:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:29.199 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 16:33:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.199 16:33:27 -- target/dif.sh@139 -- # create_transport 00:34:29.199 16:33:27 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:29.199 16:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.199 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 [2024-04-23 16:33:28.000692] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.199 16:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.199 16:33:28 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:29.199 16:33:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:29.199 16:33:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:29.199 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 ************************************ 00:34:29.199 START TEST fio_dif_1_default 00:34:29.199 ************************************ 00:34:29.199 16:33:28 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:29.199 16:33:28 -- target/dif.sh@86 -- # create_subsystems 0 00:34:29.199 16:33:28 -- target/dif.sh@28 -- # local sub 00:34:29.199 16:33:28 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.199 16:33:28 -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.199 16:33:28 -- target/dif.sh@18 -- # local 
sub_id=0 00:34:29.199 16:33:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.199 16:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.199 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 bdev_null0 00:34:29.199 16:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.199 16:33:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.199 16:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.199 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 16:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.199 16:33:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.199 16:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.199 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 16:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.199 16:33:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.199 16:33:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.199 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:34:29.199 [2024-04-23 16:33:28.040831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.199 16:33:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.199 16:33:28 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.199 16:33:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.199 16:33:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.199 16:33:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:29.199 16:33:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.199 16:33:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:29.199 16:33:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.199 16:33:28 -- common/autotest_common.sh@1320 -- # shift 00:34:29.199 16:33:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:29.199 16:33:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.199 16:33:28 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.199 16:33:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.199 16:33:28 -- nvmf/common.sh@520 -- # config=() 00:34:29.199 16:33:28 -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.199 16:33:28 -- nvmf/common.sh@520 -- # local subsystem config 00:34:29.199 16:33:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:29.199 16:33:28 -- target/dif.sh@54 -- # local file 00:34:29.199 16:33:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:29.199 { 00:34:29.199 "params": { 00:34:29.199 "name": "Nvme$subsystem", 00:34:29.199 "trtype": "$TEST_TRANSPORT", 00:34:29.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.199 "adrfam": "ipv4", 00:34:29.199 "trsvcid": "$NVMF_PORT", 00:34:29.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.199 "hdgst": ${hdgst:-false}, 00:34:29.199 "ddgst": ${ddgst:-false} 00:34:29.199 
}, 00:34:29.199 "method": "bdev_nvme_attach_controller" 00:34:29.199 } 00:34:29.199 EOF 00:34:29.199 )") 00:34:29.199 16:33:28 -- target/dif.sh@56 -- # cat 00:34:29.199 16:33:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.199 16:33:28 -- nvmf/common.sh@542 -- # cat 00:34:29.199 16:33:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:29.199 16:33:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:29.199 16:33:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.199 16:33:28 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.199 16:33:28 -- nvmf/common.sh@544 -- # jq . 00:34:29.199 16:33:28 -- nvmf/common.sh@545 -- # IFS=, 00:34:29.199 16:33:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:29.199 "params": { 00:34:29.199 "name": "Nvme0", 00:34:29.199 "trtype": "tcp", 00:34:29.199 "traddr": "10.0.0.2", 00:34:29.199 "adrfam": "ipv4", 00:34:29.199 "trsvcid": "4420", 00:34:29.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.200 "hdgst": false, 00:34:29.200 "ddgst": false 00:34:29.200 }, 00:34:29.200 "method": "bdev_nvme_attach_controller" 00:34:29.200 }' 00:34:29.200 16:33:28 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:29.200 16:33:28 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:29.200 16:33:28 -- common/autotest_common.sh@1326 -- # break 00:34:29.200 16:33:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.200 16:33:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.788 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.788 fio-3.35 00:34:29.788 Starting 1 thread 00:34:29.788 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.355 [2024-04-23 16:33:29.287417] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:30.355 [2024-04-23 16:33:29.287481] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:42.569 00:34:42.569 filename0: (groupid=0, jobs=1): err= 0: pid=3334535: Tue Apr 23 16:33:39 2024 00:34:42.569 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:34:42.569 slat (nsec): min=5929, max=32965, avg=7494.60, stdev=2272.71 00:34:42.569 clat (usec): min=40839, max=42997, avg=41976.77, stdev=125.89 00:34:42.569 lat (usec): min=40847, max=43004, avg=41984.26, stdev=125.88 00:34:42.569 clat percentiles (usec): 00:34:42.569 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:34:42.569 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:42.569 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:42.569 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:42.569 | 99.99th=[43254] 00:34:42.569 bw ( KiB/s): min= 352, max= 384, per=99.75%, avg=380.80, stdev= 9.85, samples=20 00:34:42.569 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:34:42.569 lat (msec) : 50=100.00% 00:34:42.569 cpu : usr=95.98%, sys=3.72%, ctx=14, majf=0, minf=1637 00:34:42.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.569 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:42.569 00:34:42.569 Run status group 0 (all jobs): 00:34:42.569 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10038-10038msec 00:34:42.569 ----------------------------------------------------- 00:34:42.569 Suppressions used: 00:34:42.569 count bytes template 00:34:42.569 1 8 /usr/src/fio/parse.c 00:34:42.569 1 8 libtcmalloc_minimal.so 00:34:42.569 1 904 libcrypto.so 00:34:42.569 ----------------------------------------------------- 00:34:42.569 00:34:42.569 16:33:40 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:42.569 16:33:40 -- target/dif.sh@43 -- # local sub 00:34:42.569 16:33:40 -- target/dif.sh@45 -- # for sub in "$@" 00:34:42.569 16:33:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.569 16:33:40 -- target/dif.sh@36 -- # local sub_id=0 00:34:42.569 16:33:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 16:33:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 00:34:42.569 real 0m12.217s 00:34:42.569 user 0m30.157s 00:34:42.569 sys 0m0.880s 00:34:42.569 16:33:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 ************************************ 00:34:42.569 END TEST fio_dif_1_default 00:34:42.569 ************************************ 00:34:42.569 16:33:40 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:42.569 16:33:40 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:42.569 16:33:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 ************************************ 00:34:42.569 START TEST fio_dif_1_multi_subsystems 00:34:42.569 ************************************ 00:34:42.569 16:33:40 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:42.569 16:33:40 -- target/dif.sh@92 -- # local files=1 00:34:42.569 16:33:40 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:42.569 16:33:40 -- target/dif.sh@28 -- # local sub 00:34:42.569 16:33:40 -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.569 16:33:40 -- target/dif.sh@31 -- # create_subsystem 0 00:34:42.569 16:33:40 -- target/dif.sh@18 -- # local sub_id=0 00:34:42.569 16:33:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 bdev_null0 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 16:33:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 16:33:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 16:33:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:42.569 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.569 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.569 [2024-04-23 16:33:40.300531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.569 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.569 16:33:40 -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.569 16:33:40 -- target/dif.sh@31 -- # create_subsystem 1 00:34:42.570 16:33:40 -- target/dif.sh@18 -- # local sub_id=1 00:34:42.570 16:33:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:42.570 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.570 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 bdev_null1 00:34:42.570 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.570 16:33:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:42.570 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.570 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.570 16:33:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:42.570 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.570 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 16:33:40 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.570 16:33:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.570 16:33:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:42.570 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 16:33:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:42.570 16:33:40 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:42.570 16:33:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.570 16:33:40 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:42.570 16:33:40 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.570 16:33:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:42.570 16:33:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.570 16:33:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:42.570 16:33:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:42.570 16:33:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.570 16:33:40 -- common/autotest_common.sh@1320 -- # shift 00:34:42.570 16:33:40 -- nvmf/common.sh@520 -- # config=() 00:34:42.570 16:33:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:42.570 16:33:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.570 16:33:40 -- nvmf/common.sh@520 -- # local subsystem config 00:34:42.570 16:33:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:42.570 16:33:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:42.570 { 00:34:42.570 "params": { 00:34:42.570 "name": "Nvme$subsystem", 00:34:42.570 "trtype": "$TEST_TRANSPORT", 00:34:42.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.570 "adrfam": "ipv4", 00:34:42.570 "trsvcid": "$NVMF_PORT", 00:34:42.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.570 "hdgst": ${hdgst:-false}, 00:34:42.570 "ddgst": ${ddgst:-false} 00:34:42.570 }, 00:34:42.570 "method": "bdev_nvme_attach_controller" 00:34:42.570 } 00:34:42.570 EOF 00:34:42.570 )") 00:34:42.570 16:33:40 -- target/dif.sh@82 -- # gen_fio_conf 00:34:42.570 16:33:40 -- target/dif.sh@54 -- # local file 00:34:42.570 16:33:40 -- target/dif.sh@56 -- # cat 00:34:42.570 16:33:40 -- nvmf/common.sh@542 -- # cat 00:34:42.570 16:33:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.570 16:33:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:42.570 16:33:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:42.570 16:33:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:42.570 16:33:40 -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.570 16:33:40 -- target/dif.sh@73 -- # cat 00:34:42.570 16:33:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:42.570 16:33:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:42.570 { 00:34:42.570 "params": { 00:34:42.570 "name": "Nvme$subsystem", 00:34:42.570 "trtype": "$TEST_TRANSPORT", 00:34:42.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.570 "adrfam": "ipv4", 00:34:42.570 "trsvcid": "$NVMF_PORT", 00:34:42.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.570 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.570 "hdgst": ${hdgst:-false}, 00:34:42.570 "ddgst": ${ddgst:-false} 00:34:42.570 }, 00:34:42.570 "method": "bdev_nvme_attach_controller" 00:34:42.570 } 00:34:42.570 EOF 00:34:42.570 )") 00:34:42.570 16:33:40 -- nvmf/common.sh@542 -- # cat 00:34:42.570 16:33:40 -- target/dif.sh@72 -- # (( file++ )) 00:34:42.570 16:33:40 -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.570 16:33:40 -- nvmf/common.sh@544 -- # jq . 00:34:42.570 16:33:40 -- nvmf/common.sh@545 -- # IFS=, 00:34:42.570 16:33:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:42.570 "params": { 00:34:42.570 "name": "Nvme0", 00:34:42.570 "trtype": "tcp", 00:34:42.570 "traddr": "10.0.0.2", 00:34:42.570 "adrfam": "ipv4", 00:34:42.570 "trsvcid": "4420", 00:34:42.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.570 "hdgst": false, 00:34:42.570 "ddgst": false 00:34:42.570 }, 00:34:42.570 "method": "bdev_nvme_attach_controller" 00:34:42.570 },{ 00:34:42.570 "params": { 00:34:42.570 "name": "Nvme1", 00:34:42.570 "trtype": "tcp", 00:34:42.570 "traddr": "10.0.0.2", 00:34:42.570 "adrfam": "ipv4", 00:34:42.570 "trsvcid": "4420", 00:34:42.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:42.570 "hdgst": false, 00:34:42.570 "ddgst": false 00:34:42.570 }, 00:34:42.570 "method": "bdev_nvme_attach_controller" 00:34:42.570 }' 00:34:42.570 16:33:40 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:42.570 16:33:40 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:42.570 16:33:40 -- common/autotest_common.sh@1326 -- # break 00:34:42.570 16:33:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:42.570 16:33:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.570 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.570 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:42.570 fio-3.35 00:34:42.570 Starting 2 threads 00:34:42.570 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.828 [2024-04-23 16:33:41.605197] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:42.828 [2024-04-23 16:33:41.605263] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:55.036 00:34:55.036 filename0: (groupid=0, jobs=1): err= 0: pid=3337076: Tue Apr 23 16:33:51 2024 00:34:55.036 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10018msec) 00:34:55.036 slat (nsec): min=4085, max=29945, avg=6611.72, stdev=1109.94 00:34:55.036 clat (usec): min=881, max=43076, avg=21570.48, stdev=20157.57 00:34:55.036 lat (usec): min=890, max=43106, avg=21577.10, stdev=20157.29 00:34:55.036 clat percentiles (usec): 00:34:55.036 | 1.00th=[ 1237], 5.00th=[ 1287], 10.00th=[ 1287], 20.00th=[ 1336], 00:34:55.036 | 30.00th=[ 1352], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41681], 00:34:55.036 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:55.036 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:34:55.036 | 99.99th=[43254] 00:34:55.036 bw ( KiB/s): min= 672, max= 768, per=66.05%, avg=740.80, stdev=33.28, samples=20 00:34:55.036 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:34:55.036 lat (usec) : 1000=0.22% 00:34:55.036 lat (msec) : 2=49.57%, 50=50.22% 00:34:55.036 cpu : usr=98.81%, sys=0.90%, ctx=34, majf=0, minf=1634 00:34:55.036 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.036 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.036 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:55.036 filename1: (groupid=0, jobs=1): err= 0: pid=3337077: Tue Apr 23 16:33:51 2024 00:34:55.036 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:34:55.036 slat (nsec): min=3959, max=31170, avg=7050.73, stdev=1553.55 00:34:55.036 clat (usec): min=41717, max=43184, avg=41985.04, stdev=89.84 00:34:55.036 lat (usec): min=41723, max=43215, avg=41992.09, stdev=90.24 00:34:55.036 clat percentiles (usec): 00:34:55.036 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:34:55.036 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:55.036 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:55.036 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:55.036 | 99.99th=[43254] 00:34:55.036 bw ( KiB/s): min= 352, max= 384, per=33.92%, avg=380.80, stdev= 9.85, samples=20 00:34:55.036 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:34:55.037 lat (msec) : 50=100.00% 00:34:55.037 cpu : usr=98.55%, sys=1.19%, ctx=19, majf=0, minf=1637 00:34:55.037 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:55.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.037 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.037 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:55.037 00:34:55.037 Run status group 0 (all jobs): 00:34:55.037 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10018-10040msec 00:34:55.037 ----------------------------------------------------- 00:34:55.037 Suppressions used: 00:34:55.037 count bytes template 00:34:55.037 2 16 /usr/src/fio/parse.c 00:34:55.037 1 8 libtcmalloc_minimal.so 00:34:55.037 1 904 libcrypto.so 00:34:55.037 
----------------------------------------------------- 00:34:55.037 00:34:55.037 16:33:52 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:55.037 16:33:52 -- target/dif.sh@43 -- # local sub 00:34:55.037 16:33:52 -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.037 16:33:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:55.037 16:33:52 -- target/dif.sh@36 -- # local sub_id=0 00:34:55.037 16:33:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.037 16:33:52 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:55.037 16:33:52 -- target/dif.sh@36 -- # local sub_id=1 00:34:55.037 16:33:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 00:34:55.037 real 0m12.264s 00:34:55.037 user 0m33.693s 00:34:55.037 sys 0m0.734s 00:34:55.037 16:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 ************************************ 00:34:55.037 END TEST fio_dif_1_multi_subsystems 00:34:55.037 ************************************ 00:34:55.037 16:33:52 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:55.037 16:33:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:55.037 16:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 ************************************ 00:34:55.037 START TEST fio_dif_rand_params 00:34:55.037 ************************************ 00:34:55.037 16:33:52 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:55.037 16:33:52 -- target/dif.sh@100 -- # local NULL_DIF 00:34:55.037 16:33:52 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:55.037 16:33:52 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:55.037 16:33:52 -- target/dif.sh@103 -- # bs=128k 00:34:55.037 16:33:52 -- target/dif.sh@103 -- # numjobs=3 00:34:55.037 16:33:52 -- target/dif.sh@103 -- # iodepth=3 00:34:55.037 16:33:52 -- target/dif.sh@103 -- # runtime=5 00:34:55.037 16:33:52 -- target/dif.sh@105 -- # create_subsystems 0 00:34:55.037 16:33:52 -- target/dif.sh@28 -- # local sub 00:34:55.037 16:33:52 -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.037 16:33:52 -- target/dif.sh@31 -- # create_subsystem 0 00:34:55.037 16:33:52 -- target/dif.sh@18 -- # local sub_id=0 00:34:55.037 16:33:52 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 bdev_null0 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.037 16:33:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:55.037 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:34:55.037 [2024-04-23 16:33:52.595784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.037 16:33:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:55.037 16:33:52 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:55.037 16:33:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.037 16:33:52 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.037 16:33:52 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:55.037 16:33:52 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:55.037 16:33:52 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:55.037 16:33:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:55.037 16:33:52 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:55.037 16:33:52 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.037 16:33:52 -- common/autotest_common.sh@1320 -- # shift 00:34:55.037 16:33:52 -- nvmf/common.sh@520 -- # config=() 00:34:55.037 16:33:52 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:55.037 16:33:52 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.037 16:33:52 -- nvmf/common.sh@520 -- # local subsystem config 00:34:55.037 16:33:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:55.037 16:33:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:55.037 { 00:34:55.037 "params": { 00:34:55.037 "name": "Nvme$subsystem", 00:34:55.037 "trtype": "$TEST_TRANSPORT", 00:34:55.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.037 "adrfam": "ipv4", 00:34:55.037 "trsvcid": "$NVMF_PORT", 00:34:55.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.037 "hdgst": ${hdgst:-false}, 00:34:55.037 "ddgst": ${ddgst:-false} 00:34:55.037 }, 00:34:55.037 "method": "bdev_nvme_attach_controller" 00:34:55.037 } 00:34:55.037 EOF 00:34:55.037 )") 00:34:55.037 16:33:52 -- target/dif.sh@82 -- # gen_fio_conf 00:34:55.037 
16:33:52 -- target/dif.sh@54 -- # local file 00:34:55.037 16:33:52 -- target/dif.sh@56 -- # cat 00:34:55.037 16:33:52 -- nvmf/common.sh@542 -- # cat 00:34:55.037 16:33:52 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.037 16:33:52 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:55.037 16:33:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:55.037 16:33:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:55.037 16:33:52 -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.037 16:33:52 -- nvmf/common.sh@544 -- # jq . 00:34:55.037 16:33:52 -- nvmf/common.sh@545 -- # IFS=, 00:34:55.037 16:33:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:55.037 "params": { 00:34:55.037 "name": "Nvme0", 00:34:55.037 "trtype": "tcp", 00:34:55.037 "traddr": "10.0.0.2", 00:34:55.037 "adrfam": "ipv4", 00:34:55.037 "trsvcid": "4420", 00:34:55.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:55.037 "hdgst": false, 00:34:55.037 "ddgst": false 00:34:55.037 }, 00:34:55.037 "method": "bdev_nvme_attach_controller" 00:34:55.037 }' 00:34:55.037 16:33:52 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:55.037 16:33:52 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:55.037 16:33:52 -- common/autotest_common.sh@1326 -- # break 00:34:55.037 16:33:52 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:55.037 16:33:52 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.037 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:55.037 ... 00:34:55.037 fio-3.35 00:34:55.037 Starting 3 threads 00:34:55.037 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.037 [2024-04-23 16:33:53.629537] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:55.037 [2024-04-23 16:33:53.629615] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:00.308 00:35:00.309 filename0: (groupid=0, jobs=1): err= 0: pid=3339615: Tue Apr 23 16:33:58 2024 00:35:00.309 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(133MiB/5045msec) 00:35:00.309 slat (nsec): min=5309, max=25838, avg=8268.32, stdev=2349.01 00:35:00.309 clat (usec): min=3546, max=50960, avg=14265.39, stdev=16179.38 00:35:00.309 lat (usec): min=3553, max=50966, avg=14273.66, stdev=16179.50 00:35:00.309 clat percentiles (usec): 00:35:00.309 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5538], 00:35:00.309 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7504], 00:35:00.309 | 70.00th=[ 7963], 80.00th=[ 8979], 90.00th=[47449], 95.00th=[48497], 00:35:00.309 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:35:00.309 | 99.99th=[51119] 00:35:00.309 bw ( KiB/s): min=13056, max=45824, per=26.21%, avg=27059.20, stdev=8808.82, samples=10 00:35:00.309 iops : min= 102, max= 358, avg=211.40, stdev=68.82, samples=10 00:35:00.309 lat (msec) : 4=1.04%, 10=80.00%, 50=18.49%, 100=0.47% 00:35:00.309 cpu : usr=97.15%, sys=2.52%, ctx=10, majf=0, minf=1635 00:35:00.309 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 issued rwts: total=1060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.309 filename0: (groupid=0, jobs=1): err= 0: pid=3339616: Tue Apr 23 16:33:58 2024 00:35:00.309 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(168MiB/5003msec) 00:35:00.309 slat (nsec): min=5988, max=24545, avg=8740.43, stdev=2575.95 00:35:00.309 clat (usec): min=4404, max=93458, avg=11138.60, stdev=13037.81 00:35:00.309 lat (usec): min=4411, max=93467, avg=11147.35, stdev=13038.02 00:35:00.309 clat percentiles (usec): 00:35:00.309 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5800], 00:35:00.309 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7570], 00:35:00.309 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[49021], 00:35:00.309 | 99.00th=[51643], 99.50th=[52167], 99.90th=[93848], 99.95th=[93848], 00:35:00.309 | 99.99th=[93848] 00:35:00.309 bw ( KiB/s): min=23040, max=43264, per=33.47%, avg=34560.00, stdev=8558.79, samples=9 00:35:00.309 iops : min= 180, max= 338, avg=270.00, stdev=66.87, samples=9 00:35:00.309 lat (msec) : 10=86.03%, 20=5.05%, 50=5.57%, 100=3.34% 00:35:00.309 cpu : usr=96.90%, sys=2.78%, ctx=7, majf=0, minf=1638 00:35:00.309 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.309 filename0: (groupid=0, jobs=1): err= 0: pid=3339617: Tue Apr 23 16:33:58 2024 00:35:00.309 read: IOPS=329, BW=41.2MiB/s (43.2MB/s)(208MiB/5046msec) 00:35:00.309 slat (nsec): min=6015, max=30358, avg=7775.03, stdev=2045.74 00:35:00.309 clat (usec): min=3737, max=90445, avg=9085.16, stdev=10251.98 00:35:00.309 lat (usec): min=3744, max=90452, avg=9092.94, stdev=10252.23 00:35:00.309 clat percentiles 
(usec): 00:35:00.309 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 5014], 00:35:00.309 | 30.00th=[ 5538], 40.00th=[ 6128], 50.00th=[ 6521], 60.00th=[ 6849], 00:35:00.309 | 70.00th=[ 7439], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[47449], 00:35:00.309 | 99.00th=[50070], 99.50th=[50594], 99.90th=[59507], 99.95th=[90702], 00:35:00.309 | 99.99th=[90702] 00:35:00.309 bw ( KiB/s): min=31232, max=52224, per=41.19%, avg=42528.40, stdev=6983.47, samples=10 00:35:00.309 iops : min= 244, max= 408, avg=332.20, stdev=54.63, samples=10 00:35:00.309 lat (msec) : 4=0.18%, 10=92.13%, 20=1.80%, 50=4.87%, 100=1.02% 00:35:00.309 cpu : usr=96.73%, sys=2.91%, ctx=6, majf=0, minf=1634 00:35:00.309 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.309 issued rwts: total=1664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.309 00:35:00.309 Run status group 0 (all jobs): 00:35:00.309 READ: bw=101MiB/s (106MB/s), 26.3MiB/s-41.2MiB/s (27.5MB/s-43.2MB/s), io=509MiB (533MB), run=5003-5046msec 00:35:00.568 ----------------------------------------------------- 00:35:00.568 Suppressions used: 00:35:00.568 count bytes template 00:35:00.568 5 44 /usr/src/fio/parse.c 00:35:00.568 1 8 libtcmalloc_minimal.so 00:35:00.568 1 904 libcrypto.so 00:35:00.568 ----------------------------------------------------- 00:35:00.568 00:35:00.568 16:33:59 -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:00.568 16:33:59 -- target/dif.sh@43 -- # local sub 00:35:00.568 16:33:59 -- target/dif.sh@45 -- # for sub in "$@" 00:35:00.568 16:33:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:00.568 16:33:59 -- target/dif.sh@36 -- # local sub_id=0 00:35:00.568 16:33:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # NULL_DIF=2 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # bs=4k 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # numjobs=8 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # iodepth=16 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # runtime= 00:35:00.568 16:33:59 -- target/dif.sh@109 -- # files=2 00:35:00.568 16:33:59 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:00.568 16:33:59 -- target/dif.sh@28 -- # local sub 00:35:00.568 16:33:59 -- target/dif.sh@30 -- # for sub in "$@" 00:35:00.568 16:33:59 -- target/dif.sh@31 -- # create_subsystem 0 00:35:00.568 16:33:59 -- target/dif.sh@18 -- # local sub_id=0 00:35:00.568 16:33:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 bdev_null0 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 
16:33:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 [2024-04-23 16:33:59.375409] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@30 -- # for sub in "$@" 00:35:00.568 16:33:59 -- target/dif.sh@31 -- # create_subsystem 1 00:35:00.568 16:33:59 -- target/dif.sh@18 -- # local sub_id=1 00:35:00.568 16:33:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 bdev_null1 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.568 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.568 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.568 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.568 16:33:59 -- target/dif.sh@30 -- # for sub in "$@" 00:35:00.568 16:33:59 -- target/dif.sh@31 -- # create_subsystem 2 00:35:00.568 16:33:59 -- target/dif.sh@18 -- # local sub_id=2 00:35:00.569 16:33:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:00.569 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.569 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.569 bdev_null2 00:35:00.569 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.569 16:33:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:00.569 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.569 16:33:59 -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.569 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.569 16:33:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:00.569 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.569 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.569 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.569 16:33:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:00.569 16:33:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:00.569 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:35:00.569 16:33:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:00.569 16:33:59 -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:00.569 16:33:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:00.569 16:33:59 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:00.569 16:33:59 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:00.569 16:33:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:00.569 16:33:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:00.569 16:33:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:00.569 16:33:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:00.569 16:33:59 -- nvmf/common.sh@520 -- # config=() 00:35:00.569 16:33:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:00.569 16:33:59 -- common/autotest_common.sh@1320 -- # shift 00:35:00.569 16:33:59 -- nvmf/common.sh@520 -- # local subsystem config 00:35:00.569 16:33:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:00.569 16:33:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:00.569 16:33:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:00.569 { 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme$subsystem", 00:35:00.569 "trtype": "$TEST_TRANSPORT", 00:35:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "$NVMF_PORT", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:00.569 "hdgst": ${hdgst:-false}, 00:35:00.569 "ddgst": ${ddgst:-false} 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 } 00:35:00.569 EOF 00:35:00.569 )") 00:35:00.569 16:33:59 -- target/dif.sh@82 -- # gen_fio_conf 00:35:00.569 16:33:59 -- target/dif.sh@54 -- # local file 00:35:00.569 16:33:59 -- target/dif.sh@56 -- # cat 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # cat 00:35:00.569 16:33:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:00.569 16:33:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:00.569 16:33:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file <= files )) 00:35:00.569 16:33:59 -- target/dif.sh@73 -- # cat 00:35:00.569 16:33:59 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:00.569 { 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme$subsystem", 00:35:00.569 "trtype": "$TEST_TRANSPORT", 00:35:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "$NVMF_PORT", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:00.569 "hdgst": ${hdgst:-false}, 00:35:00.569 "ddgst": ${ddgst:-false} 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 } 00:35:00.569 EOF 00:35:00.569 )") 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # cat 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file++ )) 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file <= files )) 00:35:00.569 16:33:59 -- target/dif.sh@73 -- # cat 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file++ )) 00:35:00.569 16:33:59 -- target/dif.sh@72 -- # (( file <= files )) 00:35:00.569 16:33:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:00.569 { 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme$subsystem", 00:35:00.569 "trtype": "$TEST_TRANSPORT", 00:35:00.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "$NVMF_PORT", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:00.569 "hdgst": ${hdgst:-false}, 00:35:00.569 "ddgst": ${ddgst:-false} 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 } 00:35:00.569 EOF 00:35:00.569 )") 00:35:00.569 16:33:59 -- nvmf/common.sh@542 -- # cat 00:35:00.569 16:33:59 -- nvmf/common.sh@544 -- # jq . 
00:35:00.569 16:33:59 -- nvmf/common.sh@545 -- # IFS=, 00:35:00.569 16:33:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme0", 00:35:00.569 "trtype": "tcp", 00:35:00.569 "traddr": "10.0.0.2", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "4420", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:00.569 "hdgst": false, 00:35:00.569 "ddgst": false 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 },{ 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme1", 00:35:00.569 "trtype": "tcp", 00:35:00.569 "traddr": "10.0.0.2", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "4420", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:00.569 "hdgst": false, 00:35:00.569 "ddgst": false 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 },{ 00:35:00.569 "params": { 00:35:00.569 "name": "Nvme2", 00:35:00.569 "trtype": "tcp", 00:35:00.569 "traddr": "10.0.0.2", 00:35:00.569 "adrfam": "ipv4", 00:35:00.569 "trsvcid": "4420", 00:35:00.569 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:00.569 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:00.569 "hdgst": false, 00:35:00.569 "ddgst": false 00:35:00.569 }, 00:35:00.569 "method": "bdev_nvme_attach_controller" 00:35:00.569 }' 00:35:00.569 16:33:59 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:00.569 16:33:59 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:00.569 16:33:59 -- common/autotest_common.sh@1326 -- # break 00:35:00.569 16:33:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:00.569 16:33:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.147 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.147 ... 00:35:01.147 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.147 ... 00:35:01.147 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:01.147 ... 00:35:01.147 fio-3.35 00:35:01.147 Starting 24 threads 00:35:01.147 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.085 [2024-04-23 16:34:00.852412] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:02.085 [2024-04-23 16:34:00.852483] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:14.299 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341046: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=517, BW=2069KiB/s (2119kB/s)(20.2MiB/10021msec) 00:35:14.299 slat (usec): min=4, max=121, avg=19.03, stdev=17.70 00:35:14.299 clat (usec): min=21284, max=42881, avg=30793.99, stdev=1429.62 00:35:14.299 lat (usec): min=21288, max=42889, avg=30813.03, stdev=1429.42 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[25035], 5.00th=[29754], 10.00th=[30278], 20.00th=[30540], 00:35:14.299 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.299 | 99.00th=[36439], 99.50th=[39584], 99.90th=[40109], 99.95th=[41681], 00:35:14.299 | 99.99th=[42730] 00:35:14.299 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2067.20, stdev=42.36, samples=20 00:35:14.299 iops : min= 512, max= 544, avg=516.80, stdev=10.59, samples=20 00:35:14.299 lat (msec) : 50=100.00% 00:35:14.299 cpu : usr=98.47%, sys=1.12%, ctx=20, majf=0, minf=1632 00:35:14.299 IO depths : 1=5.7%, 2=11.5%, 4=24.1%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341048: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 00:35:14.299 slat (usec): min=4, max=156, avg=55.25, stdev=35.26 00:35:14.299 clat (usec): min=15749, max=50271, avg=30321.27, stdev=1543.61 00:35:14.299 lat (usec): min=15826, max=50296, avg=30376.52, stdev=1545.36 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[28705], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:35:14.299 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:14.299 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[31065], 00:35:14.299 | 99.00th=[31589], 99.50th=[32113], 99.90th=[50070], 99.95th=[50070], 00:35:14.299 | 99.99th=[50070] 00:35:14.299 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2067.35, stdev=62.27, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=516.80, stdev=15.66, samples=20 00:35:14.299 lat (msec) : 20=0.31%, 50=99.38%, 100=0.31% 00:35:14.299 cpu : usr=98.46%, sys=0.90%, ctx=225, majf=0, minf=1634 00:35:14.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341049: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10013msec) 00:35:14.299 slat (usec): min=5, max=139, avg=46.35, stdev=19.31 00:35:14.299 clat (usec): min=15894, max=50360, avg=30462.45, stdev=1544.99 00:35:14.299 lat (usec): min=15945, max=50382, avg=30508.81, stdev=1545.40 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[28967], 5.00th=[29754], 
10.00th=[30016], 20.00th=[30016], 00:35:14.299 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.299 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:14.299 | 99.00th=[31589], 99.50th=[32113], 99.90th=[50070], 99.95th=[50594], 00:35:14.299 | 99.99th=[50594] 00:35:14.299 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2067.35, stdev=62.27, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=516.80, stdev=15.66, samples=20 00:35:14.299 lat (msec) : 20=0.31%, 50=99.38%, 100=0.31% 00:35:14.299 cpu : usr=96.19%, sys=1.99%, ctx=53, majf=0, minf=1633 00:35:14.299 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341050: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.1MiB/10012msec) 00:35:14.299 slat (usec): min=3, max=125, avg=27.96, stdev=23.21 00:35:14.299 clat (usec): min=12068, max=61445, avg=30911.18, stdev=3914.90 00:35:14.299 lat (usec): min=12089, max=61466, avg=30939.14, stdev=3915.31 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[16057], 5.00th=[26870], 10.00th=[29754], 20.00th=[30278], 00:35:14.299 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[34866], 00:35:14.299 | 99.00th=[47449], 99.50th=[51643], 99.90th=[61604], 99.95th=[61604], 00:35:14.299 | 99.99th=[61604] 00:35:14.299 bw ( KiB/s): min= 1971, max= 2176, per=4.14%, avg=2052.95, stdev=38.54, samples=20 00:35:14.299 iops : min= 492, max= 544, avg=513.20, stdev= 9.72, samples=20 00:35:14.299 lat (msec) : 20=1.52%, 50=97.94%, 100=0.54% 00:35:14.299 cpu : usr=98.85%, sys=0.70%, ctx=51, majf=0, minf=1637 00:35:14.299 IO depths : 1=2.3%, 2=6.8%, 4=21.2%, 8=59.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341051: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=516, BW=2066KiB/s (2116kB/s)(20.2MiB/10004msec) 00:35:14.299 slat (nsec): min=4075, max=91537, avg=11684.44, stdev=6045.89 00:35:14.299 clat (usec): min=19517, max=51990, avg=30862.73, stdev=1656.71 00:35:14.299 lat (usec): min=19550, max=52012, avg=30874.41, stdev=1655.65 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[24249], 5.00th=[30016], 10.00th=[30540], 20.00th=[30540], 00:35:14.299 | 30.00th=[30802], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.299 | 99.00th=[37487], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:35:14.299 | 99.99th=[52167] 00:35:14.299 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2061.47, stdev=58.97, samples=19 00:35:14.299 iops : min= 480, max= 544, avg=515.37, stdev=14.74, samples=19 00:35:14.299 lat (msec) : 20=0.15%, 50=99.67%, 100=0.17% 00:35:14.299 cpu : usr=95.09%, sys=2.17%, 
ctx=59, majf=0, minf=1635 00:35:14.299 IO depths : 1=5.4%, 2=11.6%, 4=24.8%, 8=51.2%, 16=7.2%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341052: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=515, BW=2063KiB/s (2113kB/s)(20.2MiB/10019msec) 00:35:14.299 slat (usec): min=3, max=120, avg=12.08, stdev= 7.85 00:35:14.299 clat (usec): min=19141, max=62329, avg=30918.00, stdev=3276.12 00:35:14.299 lat (usec): min=19150, max=62350, avg=30930.08, stdev=3276.08 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[20055], 5.00th=[27132], 10.00th=[30278], 20.00th=[30540], 00:35:14.299 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[35390], 00:35:14.299 | 99.00th=[42206], 99.50th=[42730], 99.90th=[62129], 99.95th=[62129], 00:35:14.299 | 99.99th=[62129] 00:35:14.299 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2060.80, stdev=57.24, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=515.20, stdev=14.31, samples=20 00:35:14.299 lat (msec) : 20=0.99%, 50=98.70%, 100=0.31% 00:35:14.299 cpu : usr=96.85%, sys=1.65%, ctx=106, majf=0, minf=1640 00:35:14.299 IO depths : 1=4.7%, 2=10.4%, 4=23.0%, 8=54.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341054: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=515, BW=2061KiB/s (2111kB/s)(20.1MiB/10010msec) 00:35:14.299 slat (usec): min=7, max=139, avg=34.82, stdev=26.25 00:35:14.299 clat (usec): min=15863, max=58664, avg=30774.95, stdev=2350.97 00:35:14.299 lat (usec): min=15885, max=58723, avg=30809.76, stdev=2348.57 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[25297], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:35:14.299 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.299 | 99.00th=[36963], 99.50th=[47973], 99.90th=[58459], 99.95th=[58459], 00:35:14.299 | 99.99th=[58459] 00:35:14.299 bw ( KiB/s): min= 1840, max= 2176, per=4.16%, avg=2061.60, stdev=68.92, samples=20 00:35:14.299 iops : min= 460, max= 544, avg=515.40, stdev=17.23, samples=20 00:35:14.299 lat (msec) : 20=0.23%, 50=99.38%, 100=0.39% 00:35:14.299 cpu : usr=98.46%, sys=1.01%, ctx=136, majf=0, minf=1634 00:35:14.299 IO depths : 1=2.5%, 2=5.1%, 4=13.9%, 8=66.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=92.1%, 8=4.3%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename0: (groupid=0, jobs=1): err= 0: pid=3341055: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 
00:35:14.299 slat (usec): min=5, max=149, avg=36.73, stdev=24.70 00:35:14.299 clat (usec): min=13484, max=55994, avg=30627.55, stdev=1520.53 00:35:14.299 lat (usec): min=13489, max=56020, avg=30664.28, stdev=1520.78 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[26346], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:35:14.299 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31065], 95.00th=[31327], 00:35:14.299 | 99.00th=[34866], 99.50th=[36439], 99.90th=[43254], 99.95th=[43779], 00:35:14.299 | 99.99th=[55837] 00:35:14.299 bw ( KiB/s): min= 2036, max= 2176, per=4.17%, avg=2067.40, stdev=44.98, samples=20 00:35:14.299 iops : min= 509, max= 544, avg=516.85, stdev=11.24, samples=20 00:35:14.299 lat (msec) : 20=0.31%, 50=99.65%, 100=0.04% 00:35:14.299 cpu : usr=98.93%, sys=0.63%, ctx=29, majf=0, minf=1636 00:35:14.299 IO depths : 1=3.5%, 2=9.6%, 4=24.8%, 8=53.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename1: (groupid=0, jobs=1): err= 0: pid=3341056: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10010msec) 00:35:14.299 slat (usec): min=6, max=146, avg=45.55, stdev=21.09 00:35:14.299 clat (usec): min=28596, max=53081, avg=30621.37, stdev=1327.80 00:35:14.299 lat (usec): min=28628, max=53111, avg=30666.93, stdev=1324.10 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:35:14.299 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:14.299 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:35:14.299 | 99.00th=[31589], 99.50th=[32113], 99.90th=[53216], 99.95th=[53216], 00:35:14.299 | 99.99th=[53216] 00:35:14.299 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2060.95, stdev=56.86, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=515.20, stdev=14.31, samples=20 00:35:14.299 lat (msec) : 50=99.69%, 100=0.31% 00:35:14.299 cpu : usr=95.14%, sys=2.39%, ctx=50, majf=0, minf=1635 00:35:14.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename1: (groupid=0, jobs=1): err= 0: pid=3341057: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=516, BW=2066KiB/s (2115kB/s)(20.2MiB/10018msec) 00:35:14.299 slat (usec): min=3, max=211, avg=16.58, stdev=11.05 00:35:14.299 clat (usec): min=10455, max=57602, avg=30864.52, stdev=2619.43 00:35:14.299 lat (usec): min=10471, max=57623, avg=30881.10, stdev=2619.73 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[19530], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:35:14.299 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.299 | 99.00th=[46400], 99.50th=[48497], 99.90th=[54789], 99.95th=[57410], 00:35:14.299 | 
99.99th=[57410] 00:35:14.299 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2063.35, stdev=60.72, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=515.80, stdev=15.27, samples=20 00:35:14.299 lat (msec) : 20=1.04%, 50=98.74%, 100=0.21% 00:35:14.299 cpu : usr=98.66%, sys=0.93%, ctx=18, majf=0, minf=1636 00:35:14.299 IO depths : 1=1.8%, 2=5.4%, 4=15.2%, 8=64.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=92.3%, 8=4.2%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename1: (groupid=0, jobs=1): err= 0: pid=3341058: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=517, BW=2069KiB/s (2119kB/s)(20.2MiB/10022msec) 00:35:14.299 slat (usec): min=4, max=167, avg=16.20, stdev=10.17 00:35:14.299 clat (usec): min=10451, max=60788, avg=30804.02, stdev=2055.68 00:35:14.299 lat (usec): min=10460, max=60820, avg=30820.22, stdev=2055.03 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[28705], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:35:14.299 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.299 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:35:14.299 | 99.00th=[32113], 99.50th=[32375], 99.90th=[60556], 99.95th=[60556], 00:35:14.299 | 99.99th=[60556] 00:35:14.299 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2067.20, stdev=62.64, samples=20 00:35:14.299 iops : min= 480, max= 544, avg=516.80, stdev=15.66, samples=20 00:35:14.299 lat (msec) : 20=0.46%, 50=99.19%, 100=0.35% 00:35:14.299 cpu : usr=95.18%, sys=2.34%, ctx=80, majf=0, minf=1636 00:35:14.299 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.299 filename1: (groupid=0, jobs=1): err= 0: pid=3341060: Tue Apr 23 16:34:11 2024 00:35:14.299 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10006msec) 00:35:14.299 slat (usec): min=7, max=153, avg=45.40, stdev=23.14 00:35:14.299 clat (usec): min=17800, max=48993, avg=30450.66, stdev=2081.64 00:35:14.299 lat (usec): min=17808, max=49045, avg=30496.06, stdev=2082.59 00:35:14.299 clat percentiles (usec): 00:35:14.299 | 1.00th=[19006], 5.00th=[29492], 10.00th=[30016], 20.00th=[30016], 00:35:14.299 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.299 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:35:14.299 | 99.00th=[35390], 99.50th=[44827], 99.90th=[49021], 99.95th=[49021], 00:35:14.299 | 99.99th=[49021] 00:35:14.299 bw ( KiB/s): min= 1920, max= 2352, per=4.18%, avg=2070.74, stdev=85.55, samples=19 00:35:14.299 iops : min= 480, max= 588, avg=517.68, stdev=21.39, samples=19 00:35:14.299 lat (msec) : 20=1.08%, 50=98.92% 00:35:14.299 cpu : usr=98.90%, sys=0.61%, ctx=58, majf=0, minf=1635 00:35:14.299 IO depths : 1=4.7%, 2=10.8%, 4=24.4%, 8=52.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:14.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.299 issued rwts: 
total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename1: (groupid=0, jobs=1): err= 0: pid=3341061: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10015msec) 00:35:14.300 slat (usec): min=3, max=167, avg=39.45, stdev=22.89 00:35:14.300 clat (usec): min=15954, max=51478, avg=30611.51, stdev=1675.69 00:35:14.300 lat (usec): min=15958, max=51500, avg=30650.95, stdev=1676.98 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[25297], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:35:14.300 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:14.300 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.300 | 99.00th=[35390], 99.50th=[36439], 99.90th=[47973], 99.95th=[51643], 00:35:14.300 | 99.99th=[51643] 00:35:14.300 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2067.20, stdev=46.89, samples=20 00:35:14.300 iops : min= 512, max= 544, avg=516.80, stdev=11.72, samples=20 00:35:14.300 lat (msec) : 20=0.25%, 50=99.69%, 100=0.06% 00:35:14.300 cpu : usr=98.70%, sys=0.85%, ctx=67, majf=0, minf=1635 00:35:14.300 IO depths : 1=4.4%, 2=10.2%, 4=23.5%, 8=53.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename1: (groupid=0, jobs=1): err= 0: pid=3341062: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2070KiB/s (2119kB/s)(20.2MiB/10011msec) 00:35:14.300 slat (usec): min=4, max=153, avg=48.58, stdev=23.37 00:35:14.300 clat (usec): min=15611, max=50886, avg=30455.44, stdev=1951.31 00:35:14.300 lat (usec): min=15671, max=50896, avg=30504.03, stdev=1950.97 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[21627], 5.00th=[29492], 10.00th=[30016], 20.00th=[30016], 00:35:14.300 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.300 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:35:14.300 | 99.00th=[38011], 99.50th=[44827], 99.90th=[50594], 99.95th=[51119], 00:35:14.300 | 99.99th=[51119] 00:35:14.300 bw ( KiB/s): min= 1916, max= 2176, per=4.17%, avg=2065.40, stdev=71.78, samples=20 00:35:14.300 iops : min= 479, max= 544, avg=516.35, stdev=17.95, samples=20 00:35:14.300 lat (msec) : 20=0.46%, 50=99.42%, 100=0.12% 00:35:14.300 cpu : usr=99.04%, sys=0.55%, ctx=13, majf=0, minf=1636 00:35:14.300 IO depths : 1=5.8%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename1: (groupid=0, jobs=1): err= 0: pid=3341063: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 00:35:14.300 slat (usec): min=5, max=156, avg=50.56, stdev=22.10 00:35:14.300 clat (usec): min=15971, max=51868, avg=30441.07, stdev=1610.21 00:35:14.300 lat (usec): min=15982, max=51894, avg=30491.63, stdev=1610.54 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[28967], 5.00th=[29754], 
10.00th=[30016], 20.00th=[30016], 00:35:14.300 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.300 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:35:14.300 | 99.00th=[31589], 99.50th=[32113], 99.90th=[51643], 99.95th=[51643], 00:35:14.300 | 99.99th=[51643] 00:35:14.300 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2067.20, stdev=62.64, samples=20 00:35:14.300 iops : min= 480, max= 544, avg=516.80, stdev=15.66, samples=20 00:35:14.300 lat (msec) : 20=0.31%, 50=99.38%, 100=0.31% 00:35:14.300 cpu : usr=98.71%, sys=0.78%, ctx=62, majf=0, minf=1634 00:35:14.300 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename1: (groupid=0, jobs=1): err= 0: pid=3341064: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10011msec) 00:35:14.300 slat (usec): min=5, max=219, avg=50.46, stdev=22.11 00:35:14.300 clat (usec): min=15748, max=47033, avg=30414.50, stdev=1455.95 00:35:14.300 lat (usec): min=15782, max=47062, avg=30464.96, stdev=1455.73 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:14.300 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.300 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:14.300 | 99.00th=[31589], 99.50th=[32113], 99.90th=[46924], 99.95th=[46924], 00:35:14.300 | 99.99th=[46924] 00:35:14.300 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=2067.20, stdev=75.15, samples=20 00:35:14.300 iops : min= 480, max= 544, avg=516.80, stdev=18.79, samples=20 00:35:14.300 lat (msec) : 20=0.31%, 50=99.69% 00:35:14.300 cpu : usr=92.61%, sys=3.38%, ctx=141, majf=0, minf=1636 00:35:14.300 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341066: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=511, BW=2046KiB/s (2095kB/s)(20.0MiB/10009msec) 00:35:14.300 slat (nsec): min=5398, max=96034, avg=18741.39, stdev=14037.31 00:35:14.300 clat (usec): min=9693, max=53324, avg=31155.29, stdev=4650.88 00:35:14.300 lat (usec): min=9703, max=53355, avg=31174.03, stdev=4650.88 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[11994], 5.00th=[29492], 10.00th=[30278], 20.00th=[30540], 00:35:14.300 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[35914], 00:35:14.300 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:35:14.300 | 99.99th=[53216] 00:35:14.300 bw ( KiB/s): min= 1888, max= 2224, per=4.12%, avg=2043.35, stdev=76.10, samples=20 00:35:14.300 iops : min= 472, max= 556, avg=510.80, stdev=19.09, samples=20 00:35:14.300 lat (msec) : 10=0.02%, 20=2.15%, 50=95.98%, 100=1.86% 00:35:14.300 cpu : usr=98.50%, sys=1.03%, 
ctx=48, majf=0, minf=1635 00:35:14.300 IO depths : 1=0.9%, 2=3.1%, 4=11.0%, 8=70.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=91.4%, 8=5.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341067: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=516, BW=2065KiB/s (2114kB/s)(20.2MiB/10011msec) 00:35:14.300 slat (usec): min=5, max=158, avg=42.01, stdev=25.95 00:35:14.300 clat (usec): min=15521, max=60635, avg=30615.05, stdev=2187.80 00:35:14.300 lat (usec): min=15530, max=60663, avg=30657.06, stdev=2185.89 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:14.300 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:35:14.300 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31327], 00:35:14.300 | 99.00th=[32113], 99.50th=[45876], 99.90th=[60556], 99.95th=[60556], 00:35:14.300 | 99.99th=[60556] 00:35:14.300 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2060.80, stdev=57.48, samples=20 00:35:14.300 iops : min= 480, max= 544, avg=515.20, stdev=14.37, samples=20 00:35:14.300 lat (msec) : 20=0.39%, 50=99.19%, 100=0.43% 00:35:14.300 cpu : usr=95.39%, sys=2.27%, ctx=68, majf=0, minf=1633 00:35:14.300 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341068: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=513, BW=2055KiB/s (2105kB/s)(20.1MiB/10007msec) 00:35:14.300 slat (usec): min=5, max=120, avg=22.98, stdev=19.81 00:35:14.300 clat (usec): min=11053, max=53141, avg=30973.42, stdev=3517.80 00:35:14.300 lat (usec): min=11065, max=53152, avg=30996.40, stdev=3516.32 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[20841], 5.00th=[26870], 10.00th=[29492], 20.00th=[30278], 00:35:14.300 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31851], 95.00th=[35914], 00:35:14.300 | 99.00th=[48497], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:35:14.300 | 99.99th=[53216] 00:35:14.300 bw ( KiB/s): min= 1904, max= 2160, per=4.14%, avg=2050.53, stdev=59.21, samples=19 00:35:14.300 iops : min= 476, max= 540, avg=512.63, stdev=14.80, samples=19 00:35:14.300 lat (msec) : 20=0.70%, 50=98.66%, 100=0.64% 00:35:14.300 cpu : usr=98.95%, sys=0.65%, ctx=16, majf=0, minf=1634 00:35:14.300 IO depths : 1=0.3%, 2=4.0%, 4=16.1%, 8=65.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=92.3%, 8=3.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341069: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10012msec) 
00:35:14.300 slat (usec): min=5, max=269, avg=31.02, stdev=23.77 00:35:14.300 clat (usec): min=12387, max=47850, avg=30694.35, stdev=1266.41 00:35:14.300 lat (usec): min=12393, max=47866, avg=30725.37, stdev=1264.58 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:35:14.300 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:35:14.300 | 99.00th=[31851], 99.50th=[32113], 99.90th=[43254], 99.95th=[43254], 00:35:14.300 | 99.99th=[47973] 00:35:14.300 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2067.20, stdev=46.89, samples=20 00:35:14.300 iops : min= 512, max= 544, avg=516.80, stdev=11.72, samples=20 00:35:14.300 lat (msec) : 20=0.31%, 50=99.69% 00:35:14.300 cpu : usr=99.00%, sys=0.53%, ctx=77, majf=0, minf=1636 00:35:14.300 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341071: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10014msec) 00:35:14.300 slat (usec): min=4, max=153, avg=50.26, stdev=21.67 00:35:14.300 clat (usec): min=15908, max=50318, avg=30443.10, stdev=1529.97 00:35:14.300 lat (usec): min=15968, max=50340, avg=30493.37, stdev=1528.80 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:35:14.300 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:14.300 | 70.00th=[30802], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:35:14.300 | 99.00th=[31589], 99.50th=[32113], 99.90th=[50070], 99.95th=[50070], 00:35:14.300 | 99.99th=[50070] 00:35:14.300 bw ( KiB/s): min= 1923, max= 2176, per=4.17%, avg=2067.35, stdev=62.27, samples=20 00:35:14.300 iops : min= 480, max= 544, avg=516.80, stdev=15.66, samples=20 00:35:14.300 lat (msec) : 20=0.31%, 50=99.38%, 100=0.31% 00:35:14.300 cpu : usr=95.15%, sys=2.44%, ctx=41, majf=0, minf=1636 00:35:14.300 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341072: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.2MiB/10010msec) 00:35:14.300 slat (nsec): min=4151, max=84836, avg=19662.16, stdev=10453.06 00:35:14.300 clat (usec): min=15577, max=55046, avg=30731.44, stdev=1687.24 00:35:14.300 lat (usec): min=15614, max=55060, avg=30751.10, stdev=1685.85 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[28443], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:35:14.300 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:35:14.300 | 99.00th=[32113], 99.50th=[32637], 99.90th=[52167], 99.95th=[52167], 00:35:14.300 
| 99.99th=[54789] 00:35:14.300 bw ( KiB/s): min= 1920, max= 2192, per=4.17%, avg=2067.20, stdev=63.07, samples=20 00:35:14.300 iops : min= 480, max= 548, avg=516.80, stdev=15.77, samples=20 00:35:14.300 lat (msec) : 20=0.35%, 50=99.31%, 100=0.35% 00:35:14.300 cpu : usr=98.71%, sys=0.79%, ctx=66, majf=0, minf=1636 00:35:14.300 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341073: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=515, BW=2060KiB/s (2110kB/s)(20.1MiB/10010msec) 00:35:14.300 slat (usec): min=4, max=120, avg=29.14, stdev=21.79 00:35:14.300 clat (usec): min=10598, max=55221, avg=30863.49, stdev=3885.31 00:35:14.300 lat (usec): min=10613, max=55229, avg=30892.62, stdev=3885.46 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[16450], 5.00th=[25822], 10.00th=[29754], 20.00th=[30278], 00:35:14.300 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31851], 95.00th=[35914], 00:35:14.300 | 99.00th=[46924], 99.50th=[49546], 99.90th=[55313], 99.95th=[55313], 00:35:14.300 | 99.99th=[55313] 00:35:14.300 bw ( KiB/s): min= 1840, max= 2176, per=4.15%, avg=2056.00, stdev=60.65, samples=20 00:35:14.300 iops : min= 460, max= 544, avg=514.00, stdev=15.16, samples=20 00:35:14.300 lat (msec) : 20=1.76%, 50=97.77%, 100=0.47% 00:35:14.300 cpu : usr=98.79%, sys=0.78%, ctx=31, majf=0, minf=1634 00:35:14.300 IO depths : 1=0.8%, 2=4.0%, 4=15.7%, 8=66.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=92.4%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued rwts: total=5156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 filename2: (groupid=0, jobs=1): err= 0: pid=3341075: Tue Apr 23 16:34:11 2024 00:35:14.300 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10013msec) 00:35:14.300 slat (usec): min=4, max=131, avg=20.42, stdev=12.99 00:35:14.300 clat (usec): min=10265, max=54350, avg=30739.29, stdev=3161.38 00:35:14.300 lat (usec): min=10274, max=54360, avg=30759.72, stdev=3161.20 00:35:14.300 clat percentiles (usec): 00:35:14.300 | 1.00th=[16450], 5.00th=[29754], 10.00th=[30278], 20.00th=[30540], 00:35:14.300 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[30802], 00:35:14.300 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:35:14.300 | 99.00th=[48497], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:35:14.300 | 99.99th=[54264] 00:35:14.300 bw ( KiB/s): min= 1920, max= 2304, per=4.17%, avg=2067.20, stdev=83.80, samples=20 00:35:14.300 iops : min= 480, max= 576, avg=516.80, stdev=20.95, samples=20 00:35:14.300 lat (msec) : 20=1.68%, 50=97.78%, 100=0.54% 00:35:14.300 cpu : usr=95.74%, sys=2.08%, ctx=84, majf=0, minf=1634 00:35:14.300 IO depths : 1=2.5%, 2=7.8%, 4=21.4%, 8=57.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:35:14.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.300 issued 
rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:14.300 00:35:14.300 Run status group 0 (all jobs): 00:35:14.300 READ: bw=48.4MiB/s (50.7MB/s), 2046KiB/s-2075KiB/s (2095kB/s-2125kB/s), io=485MiB (509MB), run=10004-10022msec 00:35:14.300 ----------------------------------------------------- 00:35:14.300 Suppressions used: 00:35:14.300 count bytes template 00:35:14.300 45 402 /usr/src/fio/parse.c 00:35:14.300 1 8 libtcmalloc_minimal.so 00:35:14.300 1 904 libcrypto.so 00:35:14.300 ----------------------------------------------------- 00:35:14.300 00:35:14.300 16:34:11 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:14.300 16:34:11 -- target/dif.sh@43 -- # local sub 00:35:14.300 16:34:11 -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.300 16:34:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.300 16:34:11 -- target/dif.sh@36 -- # local sub_id=0 00:35:14.300 16:34:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.300 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.300 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.300 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.300 16:34:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.300 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.300 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.300 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.300 16:34:11 -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.301 16:34:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:14.301 16:34:11 -- target/dif.sh@36 -- # local sub_id=1 00:35:14.301 16:34:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.301 16:34:11 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:14.301 16:34:11 -- target/dif.sh@36 -- # local sub_id=2 00:35:14.301 16:34:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # numjobs=2 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # iodepth=8 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # runtime=5 00:35:14.301 16:34:11 -- target/dif.sh@115 -- # files=1 00:35:14.301 
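Before the next fio pass, target/dif.sh tears down and recreates the targets; the rpc_cmd calls traced below amount to roughly the following standalone sequence against a running nvmf_tgt. This is a hedged approximation, not the script itself: scripts/rpc.py is assumed to be invoked from an SPDK checkout, and the TCP transport is assumed to have been created earlier in the test run.

# Mirrors create_subsystem 0/1 below: a null bdev with 512-byte blocks,
# 16 bytes of metadata and DIF type 1, exposed over NVMe/TCP on 10.0.0.2:4420.
rpc=./scripts/rpc.py
for sub in 0 1; do
  $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420
done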
16:34:11 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:14.301 16:34:11 -- target/dif.sh@28 -- # local sub 00:35:14.301 16:34:11 -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.301 16:34:11 -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.301 16:34:11 -- target/dif.sh@18 -- # local sub_id=0 00:35:14.301 16:34:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 bdev_null0 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 [2024-04-23 16:34:11.604655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.301 16:34:11 -- target/dif.sh@31 -- # create_subsystem 1 00:35:14.301 16:34:11 -- target/dif.sh@18 -- # local sub_id=1 00:35:14.301 16:34:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 bdev_null1 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.301 16:34:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:14.301 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:35:14.301 16:34:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:14.301 16:34:11 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:14.301 16:34:11 -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.301 16:34:11 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:14.301 16:34:11 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.301 16:34:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:14.301 16:34:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:14.301 16:34:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.301 16:34:11 -- nvmf/common.sh@520 -- # config=() 00:35:14.301 16:34:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:14.301 16:34:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.301 16:34:11 -- nvmf/common.sh@520 -- # local subsystem config 00:35:14.301 16:34:11 -- common/autotest_common.sh@1320 -- # shift 00:35:14.301 16:34:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:14.301 16:34:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:14.301 16:34:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.301 16:34:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:14.301 { 00:35:14.301 "params": { 00:35:14.301 "name": "Nvme$subsystem", 00:35:14.301 "trtype": "$TEST_TRANSPORT", 00:35:14.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.301 "adrfam": "ipv4", 00:35:14.301 "trsvcid": "$NVMF_PORT", 00:35:14.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.301 "hdgst": ${hdgst:-false}, 00:35:14.301 "ddgst": ${ddgst:-false} 00:35:14.301 }, 00:35:14.301 "method": "bdev_nvme_attach_controller" 00:35:14.301 } 00:35:14.301 EOF 00:35:14.301 )") 00:35:14.301 16:34:11 -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.301 16:34:11 -- target/dif.sh@54 -- # local file 00:35:14.301 16:34:11 -- target/dif.sh@56 -- # cat 00:35:14.301 16:34:11 -- nvmf/common.sh@542 -- # cat 00:35:14.301 16:34:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.301 16:34:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:14.301 16:34:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:14.301 16:34:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.301 16:34:11 -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.301 16:34:11 -- target/dif.sh@73 -- # cat 00:35:14.301 16:34:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:14.301 16:34:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:14.301 { 00:35:14.301 "params": { 00:35:14.301 "name": "Nvme$subsystem", 00:35:14.301 "trtype": "$TEST_TRANSPORT", 00:35:14.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.301 "adrfam": "ipv4", 00:35:14.301 "trsvcid": "$NVMF_PORT", 00:35:14.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.301 "hdgst": ${hdgst:-false}, 00:35:14.301 "ddgst": ${ddgst:-false} 00:35:14.301 }, 00:35:14.301 "method": "bdev_nvme_attach_controller" 00:35:14.301 } 00:35:14.301 EOF 00:35:14.301 )") 00:35:14.301 16:34:11 -- nvmf/common.sh@542 -- # cat 00:35:14.301 16:34:11 -- target/dif.sh@72 -- # (( file++ )) 00:35:14.301 16:34:11 -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.301 16:34:11 -- nvmf/common.sh@544 -- # jq . 
00:35:14.301 16:34:11 -- nvmf/common.sh@545 -- # IFS=, 00:35:14.301 16:34:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:14.301 "params": { 00:35:14.301 "name": "Nvme0", 00:35:14.301 "trtype": "tcp", 00:35:14.301 "traddr": "10.0.0.2", 00:35:14.301 "adrfam": "ipv4", 00:35:14.301 "trsvcid": "4420", 00:35:14.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.301 "hdgst": false, 00:35:14.301 "ddgst": false 00:35:14.301 }, 00:35:14.301 "method": "bdev_nvme_attach_controller" 00:35:14.301 },{ 00:35:14.301 "params": { 00:35:14.301 "name": "Nvme1", 00:35:14.301 "trtype": "tcp", 00:35:14.301 "traddr": "10.0.0.2", 00:35:14.301 "adrfam": "ipv4", 00:35:14.301 "trsvcid": "4420", 00:35:14.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:14.301 "hdgst": false, 00:35:14.301 "ddgst": false 00:35:14.301 }, 00:35:14.301 "method": "bdev_nvme_attach_controller" 00:35:14.301 }' 00:35:14.301 16:34:11 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:14.301 16:34:11 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:14.301 16:34:11 -- common/autotest_common.sh@1326 -- # break 00:35:14.301 16:34:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:14.301 16:34:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.301 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:14.301 ... 00:35:14.301 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:14.301 ... 00:35:14.301 fio-3.35 00:35:14.301 Starting 4 threads 00:35:14.301 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.301 [2024-04-23 16:34:12.962444] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:14.301 [2024-04-23 16:34:12.962511] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:19.573 00:35:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=3343519: Tue Apr 23 16:34:18 2024 00:35:19.573 read: IOPS=2758, BW=21.5MiB/s (22.6MB/s)(108MiB/5002msec) 00:35:19.573 slat (nsec): min=3837, max=63762, avg=8512.87, stdev=3231.44 00:35:19.573 clat (usec): min=938, max=13158, avg=2878.15, stdev=589.07 00:35:19.573 lat (usec): min=945, max=13182, avg=2886.66, stdev=589.20 00:35:19.573 clat percentiles (usec): 00:35:19.573 | 1.00th=[ 1516], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2507], 00:35:19.573 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2900], 00:35:19.573 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3589], 95.00th=[ 4113], 00:35:19.573 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[12911], 00:35:19.573 | 99.99th=[13173] 00:35:19.573 bw ( KiB/s): min=21216, max=24192, per=26.34%, avg=22073.00, stdev=1066.04, samples=10 00:35:19.573 iops : min= 2652, max= 3024, avg=2759.10, stdev=133.27, samples=10 00:35:19.573 lat (usec) : 1000=0.01% 00:35:19.573 lat (msec) : 2=2.46%, 4=91.13%, 10=6.34%, 20=0.06% 00:35:19.573 cpu : usr=97.34%, sys=2.12%, ctx=180, majf=0, minf=1635 00:35:19.573 IO depths : 1=0.1%, 2=0.6%, 4=69.9%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 issued rwts: total=13796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.573 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.573 filename0: (groupid=0, jobs=1): err= 0: pid=3343520: Tue Apr 23 16:34:18 2024 00:35:19.573 read: IOPS=2551, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5002msec) 00:35:19.573 slat (nsec): min=3461, max=57698, avg=8305.56, stdev=3151.19 00:35:19.573 clat (usec): min=1389, max=53772, avg=3112.93, stdev=1372.99 00:35:19.573 lat (usec): min=1396, max=53789, avg=3121.23, stdev=1372.93 00:35:19.573 clat percentiles (usec): 00:35:19.573 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2737], 00:35:19.573 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:35:19.573 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 4015], 95.00th=[ 4293], 00:35:19.573 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[53740], 00:35:19.573 | 99.99th=[53740] 00:35:19.573 bw ( KiB/s): min=18352, max=20976, per=24.31%, avg=20373.33, stdev=799.00, samples=9 00:35:19.573 iops : min= 2294, max= 2622, avg=2546.67, stdev=99.87, samples=9 00:35:19.573 lat (msec) : 2=0.09%, 4=89.81%, 10=10.04%, 100=0.06% 00:35:19.573 cpu : usr=98.12%, sys=1.56%, ctx=10, majf=0, minf=1638 00:35:19.573 IO depths : 1=0.1%, 2=0.1%, 4=71.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 issued rwts: total=12764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.573 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.573 filename1: (groupid=0, jobs=1): err= 0: pid=3343521: Tue Apr 23 16:34:18 2024 00:35:19.573 read: IOPS=2662, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:35:19.573 slat (nsec): min=3566, max=66495, avg=8332.46, stdev=3261.01 00:35:19.573 clat (usec): min=1247, max=13227, avg=2982.76, stdev=592.74 00:35:19.573 lat (usec): min=1253, max=13247, avg=2991.09, stdev=592.76 
00:35:19.573 clat percentiles (usec): 00:35:19.573 | 1.00th=[ 1516], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2638], 00:35:19.573 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2966], 00:35:19.573 | 70.00th=[ 3064], 80.00th=[ 3294], 90.00th=[ 3752], 95.00th=[ 4146], 00:35:19.573 | 99.00th=[ 4621], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[12780], 00:35:19.573 | 99.99th=[13173] 00:35:19.573 bw ( KiB/s): min=20576, max=21851, per=25.42%, avg=21306.20, stdev=428.98, samples=10 00:35:19.573 iops : min= 2572, max= 2731, avg=2663.20, stdev=53.54, samples=10 00:35:19.573 lat (msec) : 2=1.96%, 4=90.83%, 10=7.16%, 20=0.06% 00:35:19.573 cpu : usr=98.04%, sys=1.66%, ctx=7, majf=0, minf=1635 00:35:19.573 IO depths : 1=0.1%, 2=0.5%, 4=70.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 issued rwts: total=13319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.573 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.573 filename1: (groupid=0, jobs=1): err= 0: pid=3343522: Tue Apr 23 16:34:18 2024 00:35:19.573 read: IOPS=2504, BW=19.6MiB/s (20.5MB/s)(97.8MiB/5001msec) 00:35:19.573 slat (nsec): min=3663, max=49690, avg=8846.65, stdev=3108.37 00:35:19.573 clat (usec): min=1271, max=53601, avg=3171.76, stdev=1376.98 00:35:19.573 lat (usec): min=1279, max=53626, avg=3180.61, stdev=1376.92 00:35:19.573 clat percentiles (usec): 00:35:19.573 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2737], 00:35:19.573 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3097], 00:35:19.573 | 70.00th=[ 3294], 80.00th=[ 3556], 90.00th=[ 3884], 95.00th=[ 4178], 00:35:19.573 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[53740], 00:35:19.573 | 99.99th=[53740] 00:35:19.573 bw ( KiB/s): min=18176, max=20912, per=23.88%, avg=20010.67, stdev=1018.30, samples=9 00:35:19.573 iops : min= 2272, max= 2614, avg=2501.33, stdev=127.29, samples=9 00:35:19.573 lat (msec) : 2=0.16%, 4=90.67%, 10=9.10%, 100=0.06% 00:35:19.573 cpu : usr=97.54%, sys=2.16%, ctx=7, majf=0, minf=1637 00:35:19.573 IO depths : 1=0.1%, 2=0.5%, 4=70.3%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.573 issued rwts: total=12523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.573 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:19.573 00:35:19.573 Run status group 0 (all jobs): 00:35:19.573 READ: bw=81.8MiB/s (85.8MB/s), 19.6MiB/s-21.5MiB/s (20.5MB/s-22.6MB/s), io=409MiB (429MB), run=5001-5002msec 00:35:20.147 ----------------------------------------------------- 00:35:20.147 Suppressions used: 00:35:20.147 count bytes template 00:35:20.147 6 52 /usr/src/fio/parse.c 00:35:20.147 1 8 libtcmalloc_minimal.so 00:35:20.147 1 904 libcrypto.so 00:35:20.147 ----------------------------------------------------- 00:35:20.147 00:35:20.147 16:34:18 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:20.147 16:34:18 -- target/dif.sh@43 -- # local sub 00:35:20.147 16:34:18 -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.147 16:34:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.147 16:34:18 -- target/dif.sh@36 -- # local sub_id=0 00:35:20.147 16:34:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.147 
16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.147 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.147 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.147 16:34:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.147 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.147 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.147 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.148 16:34:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:20.148 16:34:18 -- target/dif.sh@36 -- # local sub_id=1 00:35:20.148 16:34:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 00:35:20.148 real 0m26.267s 00:35:20.148 user 5m20.651s 00:35:20.148 sys 0m5.395s 00:35:20.148 16:34:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 ************************************ 00:35:20.148 END TEST fio_dif_rand_params 00:35:20.148 ************************************ 00:35:20.148 16:34:18 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:20.148 16:34:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:20.148 16:34:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 ************************************ 00:35:20.148 START TEST fio_dif_digest 00:35:20.148 ************************************ 00:35:20.148 16:34:18 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:20.148 16:34:18 -- target/dif.sh@123 -- # local NULL_DIF 00:35:20.148 16:34:18 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:20.148 16:34:18 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:20.148 16:34:18 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:20.148 16:34:18 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:20.148 16:34:18 -- target/dif.sh@127 -- # numjobs=3 00:35:20.148 16:34:18 -- target/dif.sh@127 -- # iodepth=3 00:35:20.148 16:34:18 -- target/dif.sh@127 -- # runtime=10 00:35:20.148 16:34:18 -- target/dif.sh@128 -- # hdgst=true 00:35:20.148 16:34:18 -- target/dif.sh@128 -- # ddgst=true 00:35:20.148 16:34:18 -- target/dif.sh@130 -- # create_subsystems 0 00:35:20.148 16:34:18 -- target/dif.sh@28 -- # local sub 00:35:20.148 16:34:18 -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.148 16:34:18 -- target/dif.sh@31 -- # create_subsystem 0 00:35:20.148 16:34:18 -- target/dif.sh@18 -- # local sub_id=0 00:35:20.148 16:34:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 bdev_null0 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.148 16:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:20.148 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:35:20.148 [2024-04-23 16:34:18.910328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.148 16:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:20.148 16:34:18 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:20.148 16:34:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.148 16:34:18 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.148 16:34:18 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:20.148 16:34:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:20.148 16:34:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.148 16:34:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:20.148 16:34:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:20.148 16:34:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.148 16:34:18 -- common/autotest_common.sh@1320 -- # shift 00:35:20.148 16:34:18 -- nvmf/common.sh@520 -- # config=() 00:35:20.148 16:34:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:20.148 16:34:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.148 16:34:18 -- nvmf/common.sh@520 -- # local subsystem config 00:35:20.148 16:34:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:20.148 16:34:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:20.148 { 00:35:20.148 "params": { 00:35:20.148 "name": "Nvme$subsystem", 00:35:20.148 "trtype": "$TEST_TRANSPORT", 00:35:20.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.148 "adrfam": "ipv4", 00:35:20.148 "trsvcid": "$NVMF_PORT", 00:35:20.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.148 "hdgst": ${hdgst:-false}, 00:35:20.148 "ddgst": ${ddgst:-false} 00:35:20.148 }, 00:35:20.148 "method": "bdev_nvme_attach_controller" 00:35:20.148 } 00:35:20.148 EOF 00:35:20.148 )") 00:35:20.148 16:34:18 -- target/dif.sh@82 -- # gen_fio_conf 00:35:20.148 16:34:18 -- target/dif.sh@54 -- # local file 00:35:20.148 16:34:18 -- target/dif.sh@56 -- # cat 00:35:20.148 16:34:18 -- nvmf/common.sh@542 -- # cat 00:35:20.148 16:34:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:20.148 16:34:18 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.148 16:34:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:20.148 16:34:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:20.148 16:34:18 -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.148 16:34:18 -- nvmf/common.sh@544 -- # jq . 00:35:20.148 16:34:18 -- nvmf/common.sh@545 -- # IFS=, 00:35:20.148 16:34:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:20.148 "params": { 00:35:20.148 "name": "Nvme0", 00:35:20.148 "trtype": "tcp", 00:35:20.148 "traddr": "10.0.0.2", 00:35:20.148 "adrfam": "ipv4", 00:35:20.148 "trsvcid": "4420", 00:35:20.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.148 "hdgst": true, 00:35:20.148 "ddgst": true 00:35:20.148 }, 00:35:20.148 "method": "bdev_nvme_attach_controller" 00:35:20.148 }' 00:35:20.148 16:34:18 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:20.148 16:34:18 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:20.148 16:34:18 -- common/autotest_common.sh@1326 -- # break 00:35:20.148 16:34:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:20.148 16:34:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.408 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:20.408 ... 00:35:20.408 fio-3.35 00:35:20.408 Starting 3 threads 00:35:20.668 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.926 [2024-04-23 16:34:19.771579] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
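For reference, the fio_dif_digest run above builds its target entirely through JSON-RPC and then hands fio a generated bdev_nvme configuration with header and data digests enabled. The same target can be stood up by hand with SPDK's scripts/rpc.py; the sketch below mirrors the rpc_cmd calls logged by target/dif.sh and assumes a running nvmf_tgt on the default RPC socket (the consolidated script is illustrative, not part of the test):

  # null bdev: 64 MB, 512-byte blocks, 16-byte metadata, end-to-end protection type 3
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

  # export it over NVMe/TCP so the fio spdk_bdev plugin can attach with
  # "hdgst": true and "ddgst": true, as in the JSON printed above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio itself is then run with the spdk_bdev ioengine via LD_PRELOAD of build/fio/spdk_bdev, reading that JSON from --spdk_json_conf, exactly as the command lines above show.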
00:35:20.926 [2024-04-23 16:34:19.771644] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:33.133 00:35:33.133 filename0: (groupid=0, jobs=1): err= 0: pid=3345161: Tue Apr 23 16:34:29 2024 00:35:33.133 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10003msec) 00:35:33.133 slat (nsec): min=6255, max=24983, avg=8238.72, stdev=1721.99 00:35:33.133 clat (usec): min=4937, max=16784, avg=11199.85, stdev=1361.28 00:35:33.133 lat (usec): min=4945, max=16809, avg=11208.08, stdev=1361.23 00:35:33.133 clat percentiles (usec): 00:35:33.133 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:35:33.133 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:35:33.133 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13173], 95.00th=[13829], 00:35:33.133 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16581], 99.95th=[16581], 00:35:33.133 | 99.99th=[16909] 00:35:33.133 bw ( KiB/s): min=29696, max=38656, per=32.87%, avg=34411.79, stdev=2442.56, samples=19 00:35:33.133 iops : min= 232, max= 302, avg=268.84, stdev=19.08, samples=19 00:35:33.133 lat (msec) : 10=18.34%, 20=81.66% 00:35:33.133 cpu : usr=95.78%, sys=3.89%, ctx=14, majf=0, minf=1634 00:35:33.133 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.133 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:33.133 filename0: (groupid=0, jobs=1): err= 0: pid=3345162: Tue Apr 23 16:34:29 2024 00:35:33.133 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(352MiB/10048msec) 00:35:33.133 slat (nsec): min=6276, max=30842, avg=8074.99, stdev=1725.36 00:35:33.133 clat (usec): min=7645, max=55728, avg=10693.23, stdev=1704.77 00:35:33.133 lat (usec): min=7661, max=55736, avg=10701.31, stdev=1704.75 00:35:33.133 clat percentiles (usec): 00:35:33.133 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:35:33.133 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:35:33.133 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12649], 95.00th=[13173], 00:35:33.133 | 99.00th=[13960], 99.50th=[14353], 99.90th=[18482], 99.95th=[47973], 00:35:33.133 | 99.99th=[55837] 00:35:33.133 bw ( KiB/s): min=31232, max=39936, per=34.36%, avg=35968.00, stdev=2638.95, samples=20 00:35:33.133 iops : min= 244, max= 312, avg=281.00, stdev=20.62, samples=20 00:35:33.133 lat (msec) : 10=35.10%, 20=64.83%, 50=0.04%, 100=0.04% 00:35:33.133 cpu : usr=95.76%, sys=3.89%, ctx=16, majf=0, minf=1639 00:35:33.133 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 issued rwts: total=2812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.133 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:33.133 filename0: (groupid=0, jobs=1): err= 0: pid=3345163: Tue Apr 23 16:34:29 2024 00:35:33.133 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10045msec) 00:35:33.133 slat (nsec): min=6267, max=26255, avg=8273.12, stdev=1846.66 00:35:33.133 clat (usec): min=7624, max=51149, avg=11017.13, stdev=1688.09 00:35:33.133 lat (usec): min=7630, max=51156, avg=11025.41, stdev=1688.13 00:35:33.133 clat percentiles (usec): 
00:35:33.133 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:35:33.133 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:35:33.133 | 70.00th=[11469], 80.00th=[12125], 90.00th=[13042], 95.00th=[13566], 00:35:33.133 | 99.00th=[14615], 99.50th=[15139], 99.90th=[19792], 99.95th=[45351], 00:35:33.133 | 99.99th=[51119] 00:35:33.133 bw ( KiB/s): min=31232, max=39424, per=33.35%, avg=34909.05, stdev=2505.45, samples=20 00:35:33.133 iops : min= 244, max= 308, avg=272.70, stdev=19.58, samples=20 00:35:33.133 lat (msec) : 10=24.81%, 20=75.12%, 50=0.04%, 100=0.04% 00:35:33.133 cpu : usr=95.77%, sys=3.88%, ctx=14, majf=0, minf=1632 00:35:33.133 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:33.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.133 issued rwts: total=2729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.133 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:33.133 00:35:33.133 Run status group 0 (all jobs): 00:35:33.133 READ: bw=102MiB/s (107MB/s), 33.5MiB/s-35.0MiB/s (35.1MB/s-36.7MB/s), io=1027MiB (1077MB), run=10003-10048msec 00:35:33.133 ----------------------------------------------------- 00:35:33.133 Suppressions used: 00:35:33.133 count bytes template 00:35:33.133 5 44 /usr/src/fio/parse.c 00:35:33.133 1 8 libtcmalloc_minimal.so 00:35:33.133 1 904 libcrypto.so 00:35:33.133 ----------------------------------------------------- 00:35:33.133 00:35:33.133 16:34:30 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:33.133 16:34:30 -- target/dif.sh@43 -- # local sub 00:35:33.133 16:34:30 -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.133 16:34:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:33.133 16:34:30 -- target/dif.sh@36 -- # local sub_id=0 00:35:33.133 16:34:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:33.133 16:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.133 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:35:33.133 16:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.133 16:34:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:33.133 16:34:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:33.133 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:35:33.133 16:34:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:33.133 00:35:33.133 real 0m11.750s 00:35:33.133 user 0m40.577s 00:35:33.133 sys 0m1.628s 00:35:33.133 16:34:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:33.133 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:35:33.133 ************************************ 00:35:33.133 END TEST fio_dif_digest 00:35:33.133 ************************************ 00:35:33.133 16:34:30 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:33.133 16:34:30 -- target/dif.sh@147 -- # nvmftestfini 00:35:33.133 16:34:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:33.133 16:34:30 -- nvmf/common.sh@116 -- # sync 00:35:33.133 16:34:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:33.133 16:34:30 -- nvmf/common.sh@119 -- # set +e 00:35:33.133 16:34:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:33.133 16:34:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:33.133 rmmod nvme_tcp 00:35:33.133 rmmod nvme_fabrics 00:35:33.133 rmmod nvme_keyring 00:35:33.133 16:34:30 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:35:33.133 16:34:30 -- nvmf/common.sh@123 -- # set -e 00:35:33.133 16:34:30 -- nvmf/common.sh@124 -- # return 0 00:35:33.133 16:34:30 -- nvmf/common.sh@477 -- # '[' -n 3333881 ']' 00:35:33.133 16:34:30 -- nvmf/common.sh@478 -- # killprocess 3333881 00:35:33.133 16:34:30 -- common/autotest_common.sh@926 -- # '[' -z 3333881 ']' 00:35:33.133 16:34:30 -- common/autotest_common.sh@930 -- # kill -0 3333881 00:35:33.133 16:34:30 -- common/autotest_common.sh@931 -- # uname 00:35:33.133 16:34:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:33.133 16:34:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3333881 00:35:33.133 16:34:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:33.133 16:34:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:33.133 16:34:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3333881' 00:35:33.133 killing process with pid 3333881 00:35:33.133 16:34:30 -- common/autotest_common.sh@945 -- # kill 3333881 00:35:33.133 16:34:30 -- common/autotest_common.sh@950 -- # wait 3333881 00:35:33.133 16:34:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:33.133 16:34:31 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:35:35.038 Waiting for block devices as requested 00:35:35.038 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:35:35.038 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.038 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.038 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.038 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.297 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.297 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.297 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.297 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.555 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.555 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.555 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.555 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.555 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.813 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.813 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:35.813 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:35:35.813 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:35:36.072 16:34:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:36.072 16:34:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:36.072 16:34:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:36.072 16:34:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:36.072 16:34:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.072 16:34:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.072 16:34:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.612 16:34:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:38.612 00:35:38.612 real 1m18.287s 00:35:38.612 user 8m10.002s 00:35:38.612 sys 0m18.198s 00:35:38.612 16:34:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:38.612 16:34:37 -- common/autotest_common.sh@10 -- # set +x 00:35:38.612 ************************************ 00:35:38.612 END TEST nvmf_dif 00:35:38.612 ************************************ 00:35:38.612 16:34:37 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:38.612 16:34:37 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:38.612 16:34:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:38.612 16:34:37 -- common/autotest_common.sh@10 -- # set +x 00:35:38.612 ************************************ 00:35:38.612 START TEST nvmf_abort_qd_sizes 00:35:38.612 ************************************ 00:35:38.612 16:34:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:38.612 * Looking for test storage... 00:35:38.612 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:35:38.612 16:34:37 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.612 16:34:37 -- nvmf/common.sh@7 -- # uname -s 00:35:38.612 16:34:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.612 16:34:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.612 16:34:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.612 16:34:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.612 16:34:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.612 16:34:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.612 16:34:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.612 16:34:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.612 16:34:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.612 16:34:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.612 16:34:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:35:38.612 16:34:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:35:38.612 16:34:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.612 16:34:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.612 16:34:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:38.612 16:34:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:38.612 16:34:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.612 16:34:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.612 16:34:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.612 16:34:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.612 16:34:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.612 16:34:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.612 16:34:37 -- paths/export.sh@5 -- # export PATH 00:35:38.612 16:34:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.612 16:34:37 -- nvmf/common.sh@46 -- # : 0 00:35:38.612 16:34:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:38.612 16:34:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:38.612 16:34:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:38.612 16:34:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.613 16:34:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.613 16:34:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:38.613 16:34:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:38.613 16:34:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:38.613 16:34:37 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:38.613 16:34:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:38.613 16:34:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.613 16:34:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:38.613 16:34:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:38.613 16:34:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:38.613 16:34:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.613 16:34:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:38.613 16:34:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.613 16:34:37 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:35:38.613 16:34:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:38.613 16:34:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:38.613 16:34:37 -- common/autotest_common.sh@10 -- # set +x 00:35:43.895 16:34:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:43.895 16:34:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:43.895 16:34:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:43.895 16:34:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:43.895 16:34:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:43.895 16:34:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:43.895 16:34:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:43.895 16:34:42 -- nvmf/common.sh@294 -- # net_devs=() 00:35:43.895 16:34:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:43.895 16:34:42 -- nvmf/common.sh@295 -- # e810=() 00:35:43.895 16:34:42 -- nvmf/common.sh@295 -- # local -ga e810 00:35:43.895 16:34:42 -- nvmf/common.sh@296 -- # x722=() 00:35:43.895 16:34:42 -- nvmf/common.sh@296 -- # local -ga x722 00:35:43.895 16:34:42 -- nvmf/common.sh@297 -- # mlx=() 00:35:43.895 16:34:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:43.895 16:34:42 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.895 16:34:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:43.895 16:34:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@326 -- # [[ '' == mlx5 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@328 -- # [[ '' == e810 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@330 -- # [[ '' == x722 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:43.895 16:34:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:43.895 16:34:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:35:43.895 Found 0000:27:00.0 (0x8086 - 0x159b) 00:35:43.895 16:34:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:43.895 16:34:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:35:43.895 Found 0000:27:00.1 (0x8086 - 0x159b) 00:35:43.895 16:34:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:43.895 16:34:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:43.895 16:34:42 -- nvmf/common.sh@371 -- # [[ '' == e810 ]] 00:35:43.896 16:34:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:43.896 16:34:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.896 16:34:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:43.896 16:34:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.896 16:34:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:35:43.896 Found net devices under 0000:27:00.0: cvl_0_0 00:35:43.896 16:34:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.896 16:34:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:43.896 16:34:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.896 16:34:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:43.896 16:34:42 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.896 16:34:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:35:43.896 Found net devices under 0000:27:00.1: cvl_0_1 00:35:43.896 16:34:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.896 16:34:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:43.896 16:34:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:43.896 16:34:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:43.896 16:34:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:43.896 16:34:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:43.896 16:34:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.896 16:34:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.896 16:34:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.896 16:34:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:43.896 16:34:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.896 16:34:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.896 16:34:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:43.896 16:34:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.896 16:34:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.896 16:34:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:43.896 16:34:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:43.896 16:34:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.896 16:34:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.896 16:34:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.896 16:34:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.896 16:34:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:43.896 16:34:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.896 16:34:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.896 16:34:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.896 16:34:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:43.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:35:43.896 00:35:43.896 --- 10.0.0.2 ping statistics --- 00:35:43.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.896 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:35:43.896 16:34:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:43.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:35:43.896 00:35:43.896 --- 10.0.0.1 ping statistics --- 00:35:43.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.896 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:35:43.896 16:34:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.896 16:34:42 -- nvmf/common.sh@410 -- # return 0 00:35:43.896 16:34:42 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:43.896 16:34:42 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:35:46.432 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.432 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.432 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:35:46.756 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:46.756 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:35:47.066 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:35:47.324 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:35:47.583 16:34:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.583 16:34:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:47.583 16:34:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:47.583 16:34:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.583 16:34:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:47.583 16:34:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:47.583 16:34:46 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:47.583 16:34:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:47.583 16:34:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:47.583 16:34:46 -- common/autotest_common.sh@10 -- # set +x 00:35:47.583 16:34:46 -- nvmf/common.sh@469 -- # nvmfpid=3354640 00:35:47.583 16:34:46 -- nvmf/common.sh@470 -- # waitforlisten 3354640 00:35:47.583 16:34:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:47.583 16:34:46 -- common/autotest_common.sh@819 -- # '[' -z 3354640 ']' 00:35:47.583 16:34:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.583 16:34:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:47.583 16:34:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.583 16:34:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:47.583 16:34:46 -- common/autotest_common.sh@10 -- # set +x 00:35:47.583 [2024-04-23 16:34:46.446808] Starting SPDK v24.01.1-pre git sha1 36faa8c312b / DPDK 23.11.0 initialization... 
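Before the abort-qd-sizes target starts, nvmf_tcp_init has wired the two cvl interfaces into a point-to-point loop: the target-side port is moved into a network namespace and addressed as 10.0.0.2, the initiator side stays on the host as 10.0.0.1, and TCP/4420 is opened in the firewall. Condensed, the commands already shown in the log amount to the following (a restatement, not an extra step):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xf), producing the SPDK/DPDK initialization messages around this point in the log.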
00:35:47.583 [2024-04-23 16:34:46.446911] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.842 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.842 [2024-04-23 16:34:46.572692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:47.842 [2024-04-23 16:34:46.672151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:47.842 [2024-04-23 16:34:46.672338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.842 [2024-04-23 16:34:46.672353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.842 [2024-04-23 16:34:46.672363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.842 [2024-04-23 16:34:46.672441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.842 [2024-04-23 16:34:46.672545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:47.843 [2024-04-23 16:34:46.672698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.843 [2024-04-23 16:34:46.672707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:48.415 16:34:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:48.415 16:34:47 -- common/autotest_common.sh@852 -- # return 0 00:35:48.415 16:34:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:48.415 16:34:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:48.415 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.415 16:34:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:48.415 16:34:47 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:48.415 16:34:47 -- scripts/common.sh@312 -- # local nvmes 00:35:48.415 16:34:47 -- scripts/common.sh@314 -- # [[ -n 0000:03:00.0 0000:c9:00.0 ]] 00:35:48.415 16:34:47 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:48.415 16:34:47 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:48.415 16:34:47 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:03:00.0 ]] 00:35:48.415 16:34:47 -- scripts/common.sh@322 -- # uname -s 00:35:48.415 16:34:47 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:48.415 16:34:47 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:48.415 16:34:47 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:48.415 16:34:47 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:35:48.415 16:34:47 -- scripts/common.sh@322 -- # uname -s 00:35:48.415 16:34:47 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:48.415 16:34:47 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:48.415 16:34:47 -- scripts/common.sh@327 -- # (( 2 )) 00:35:48.415 16:34:47 -- scripts/common.sh@328 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@81 -- # 
nvme=0000:03:00.0 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:48.415 16:34:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:48.415 16:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:48.415 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.415 ************************************ 00:35:48.415 START TEST spdk_target_abort 00:35:48.415 ************************************ 00:35:48.415 16:34:47 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:48.415 16:34:47 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target 00:35:48.415 16:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:48.415 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.677 spdk_targetn1 00:35:48.677 16:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:48.677 16:34:47 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:48.677 16:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:48.677 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.677 [2024-04-23 16:34:47.594514] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.677 16:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:48.677 16:34:47 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:48.677 16:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:48.677 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.938 16:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:48.938 16:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:48.938 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.938 16:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:48.938 16:34:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:48.938 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:35:48.938 [2024-04-23 16:34:47.622776] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.938 16:34:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.938 16:34:47 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:48.938 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.226 Initializing NVMe Controllers 00:35:52.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:52.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:52.227 Initialization complete. Launching workers. 00:35:52.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 13094, failed: 0 00:35:52.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1208, failed to submit 11886 00:35:52.227 success 863, unsuccess 345, failed 0 00:35:52.227 16:34:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.227 16:34:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:52.227 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.516 Initializing NVMe Controllers 00:35:55.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:55.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:55.516 Initialization complete. Launching workers. 
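Each spdk_target_abort pass drives SPDK's abort example (build/examples/abort) against the freshly exported nqn.2016-06.io.spdk:spdk_target subsystem at a different queue depth, counting how many outstanding I/Os could be aborted versus completed. Condensed, the rabort() loop amounts to the following sketch (arguments copied from the invocations in the log; the wrapper itself is illustrative):

  tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$tgt"
  done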
00:35:55.516 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8870, failed: 0 00:35:55.516 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1218, failed to submit 7652 00:35:55.516 success 359, unsuccess 859, failed 0 00:35:55.516 16:34:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:55.516 16:34:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:55.516 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.807 Initializing NVMe Controllers 00:35:58.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:58.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:58.807 Initialization complete. Launching workers. 00:35:58.807 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 40429, failed: 0 00:35:58.807 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2601, failed to submit 37828 00:35:58.807 success 611, unsuccess 1990, failed 0 00:35:58.807 16:34:57 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:58.807 16:34:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:58.807 16:34:57 -- common/autotest_common.sh@10 -- # set +x 00:35:58.807 16:34:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:58.807 16:34:57 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:58.807 16:34:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:58.807 16:34:57 -- common/autotest_common.sh@10 -- # set +x 00:35:59.743 16:34:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:59.743 16:34:58 -- target/abort_qd_sizes.sh@62 -- # killprocess 3354640 00:35:59.743 16:34:58 -- common/autotest_common.sh@926 -- # '[' -z 3354640 ']' 00:35:59.743 16:34:58 -- common/autotest_common.sh@930 -- # kill -0 3354640 00:35:59.743 16:34:58 -- common/autotest_common.sh@931 -- # uname 00:35:59.743 16:34:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:59.743 16:34:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3354640 00:35:59.743 16:34:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:59.743 16:34:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:59.743 16:34:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3354640' 00:35:59.743 killing process with pid 3354640 00:35:59.743 16:34:58 -- common/autotest_common.sh@945 -- # kill 3354640 00:35:59.743 16:34:58 -- common/autotest_common.sh@950 -- # wait 3354640 00:36:00.003 00:36:00.003 real 0m11.541s 00:36:00.003 user 0m45.652s 00:36:00.003 sys 0m2.164s 00:36:00.003 16:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:00.003 16:34:58 -- common/autotest_common.sh@10 -- # set +x 00:36:00.003 ************************************ 00:36:00.003 END TEST spdk_target_abort 00:36:00.003 ************************************ 00:36:00.003 16:34:58 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:36:00.003 16:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:36:00.003 16:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:00.003 16:34:58 -- common/autotest_common.sh@10 -- # 
set +x 00:36:00.003 ************************************ 00:36:00.003 START TEST kernel_target_abort 00:36:00.003 ************************************ 00:36:00.003 16:34:58 -- common/autotest_common.sh@1104 -- # kernel_target 00:36:00.003 16:34:58 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:36:00.003 16:34:58 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:36:00.003 16:34:58 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:36:00.003 16:34:58 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:36:00.003 16:34:58 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:36:00.003 16:34:58 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:00.003 16:34:58 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:00.003 16:34:58 -- nvmf/common.sh@627 -- # local block nvme 00:36:00.003 16:34:58 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:36:00.003 16:34:58 -- nvmf/common.sh@630 -- # modprobe nvmet 00:36:00.003 16:34:58 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:00.003 16:34:58 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:36:02.537 Waiting for block devices as requested 00:36:02.798 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:36:03.056 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.056 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.056 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.056 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.314 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.314 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.314 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.314 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.574 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.574 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.574 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.574 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.832 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.832 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:36:03.832 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:36:03.832 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:36:04.090 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:36:05.033 16:35:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:36:05.033 16:35:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:05.033 16:35:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:36:05.033 16:35:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:36:05.033 16:35:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:05.033 No valid GPT data, bailing 00:36:05.033 16:35:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:05.033 16:35:03 -- scripts/common.sh@393 -- # pt= 00:36:05.033 16:35:03 -- scripts/common.sh@394 -- # return 1 00:36:05.033 16:35:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:36:05.033 16:35:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:36:05.033 16:35:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:05.033 16:35:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:36:05.033 16:35:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:36:05.033 16:35:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:36:05.033 No valid 
GPT data, bailing 00:36:05.033 16:35:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:05.033 16:35:03 -- scripts/common.sh@393 -- # pt= 00:36:05.033 16:35:03 -- scripts/common.sh@394 -- # return 1 00:36:05.033 16:35:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:36:05.033 16:35:03 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n1 ]] 00:36:05.033 16:35:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:05.033 16:35:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:05.033 16:35:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:05.033 16:35:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:36:05.033 16:35:03 -- nvmf/common.sh@654 -- # echo 1 00:36:05.033 16:35:03 -- nvmf/common.sh@655 -- # echo /dev/nvme1n1 00:36:05.033 16:35:03 -- nvmf/common.sh@656 -- # echo 1 00:36:05.033 16:35:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:36:05.033 16:35:03 -- nvmf/common.sh@663 -- # echo tcp 00:36:05.033 16:35:03 -- nvmf/common.sh@664 -- # echo 4420 00:36:05.033 16:35:03 -- nvmf/common.sh@665 -- # echo ipv4 00:36:05.033 16:35:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:05.033 16:35:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:36:05.033 00:36:05.033 Discovery Log Number of Records 2, Generation counter 2 00:36:05.033 =====Discovery Log Entry 0====== 00:36:05.033 trtype: tcp 00:36:05.033 adrfam: ipv4 00:36:05.033 subtype: current discovery subsystem 00:36:05.033 treq: not specified, sq flow control disable supported 00:36:05.033 portid: 1 00:36:05.033 trsvcid: 4420 00:36:05.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:05.033 traddr: 10.0.0.1 00:36:05.033 eflags: none 00:36:05.033 sectype: none 00:36:05.033 =====Discovery Log Entry 1====== 00:36:05.033 trtype: tcp 00:36:05.033 adrfam: ipv4 00:36:05.033 subtype: nvme subsystem 00:36:05.033 treq: not specified, sq flow control disable supported 00:36:05.033 portid: 1 00:36:05.033 trsvcid: 4420 00:36:05.033 subnqn: kernel_target 00:36:05.033 traddr: 10.0.0.1 00:36:05.033 eflags: none 00:36:05.033 sectype: none 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam 
traddr trsvcid subnqn 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:05.033 16:35:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:05.033 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.318 Initializing NVMe Controllers 00:36:08.318 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:08.318 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:08.318 Initialization complete. Launching workers. 00:36:08.318 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 40588, failed: 0 00:36:08.318 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 40588, failed to submit 0 00:36:08.318 success 0, unsuccess 40588, failed 0 00:36:08.318 16:35:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:08.318 16:35:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:08.318 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.609 Initializing NVMe Controllers 00:36:11.609 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:11.609 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:11.609 Initialization complete. Launching workers. 00:36:11.609 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 95976, failed: 0 00:36:11.609 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24306, failed to submit 71670 00:36:11.609 success 0, unsuccess 24306, failed 0 00:36:11.609 16:35:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.609 16:35:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:11.609 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.142 Initializing NVMe Controllers 00:36:14.142 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:14.142 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:14.142 Initialization complete. Launching workers. 
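The kernel_target_abort variant skips SPDK on the target side and instead exposes one of the local NVMe drives through the in-kernel nvmet target, configured purely via configfs. The mkdir/echo/ln -s steps logged above correspond to the sketch below; xtrace does not record the redirection targets, so the attribute file names here are the standard nvmet configfs ones rather than literal copies from the log:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # drive selected by the block_in_use checks above
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

Once the symlink is in place, nvme discover against 10.0.0.1:4420 lists the kernel_target subsystem (as shown above), and the same three abort queue-depth passes are repeated against it.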
00:36:14.142 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 89576, failed: 0 00:36:14.142 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22378, failed to submit 67198 00:36:14.142 success 0, unsuccess 22378, failed 0 00:36:14.142 16:35:13 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:36:14.142 16:35:13 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:36:14.142 16:35:13 -- nvmf/common.sh@677 -- # echo 0 00:36:14.401 16:35:13 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:36:14.401 16:35:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:14.401 16:35:13 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:14.401 16:35:13 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:14.401 16:35:13 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:36:14.401 16:35:13 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:36:14.401 00:36:14.401 real 0m14.339s 00:36:14.401 user 0m4.962s 00:36:14.401 sys 0m4.001s 00:36:14.401 16:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:14.401 16:35:13 -- common/autotest_common.sh@10 -- # set +x 00:36:14.401 ************************************ 00:36:14.401 END TEST kernel_target_abort 00:36:14.401 ************************************ 00:36:14.401 16:35:13 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:36:14.401 16:35:13 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:36:14.401 16:35:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:14.401 16:35:13 -- nvmf/common.sh@116 -- # sync 00:36:14.401 16:35:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:14.401 16:35:13 -- nvmf/common.sh@119 -- # set +e 00:36:14.401 16:35:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:14.401 16:35:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:14.401 rmmod nvme_tcp 00:36:14.401 rmmod nvme_fabrics 00:36:14.401 rmmod nvme_keyring 00:36:14.401 16:35:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:14.401 16:35:13 -- nvmf/common.sh@123 -- # set -e 00:36:14.401 16:35:13 -- nvmf/common.sh@124 -- # return 0 00:36:14.401 16:35:13 -- nvmf/common.sh@477 -- # '[' -n 3354640 ']' 00:36:14.401 16:35:13 -- nvmf/common.sh@478 -- # killprocess 3354640 00:36:14.401 16:35:13 -- common/autotest_common.sh@926 -- # '[' -z 3354640 ']' 00:36:14.401 16:35:13 -- common/autotest_common.sh@930 -- # kill -0 3354640 00:36:14.401 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3354640) - No such process 00:36:14.401 16:35:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3354640 is not found' 00:36:14.401 Process with pid 3354640 is not found 00:36:14.401 16:35:13 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:36:14.401 16:35:13 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:36:16.933 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:36:16.933 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:36:16.933 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:36:16.933 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:36:16.933 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:36:16.933 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:36:16.933 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:36:16.933 0000:f6:02.0 
(8086 0cfe): Already using the idxd driver 00:36:16.934 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:36:17.191 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:36:17.191 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:36:17.191 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:36:17.191 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:36:17.449 16:35:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:17.449 16:35:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:17.449 16:35:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:17.449 16:35:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:17.449 16:35:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.449 16:35:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.449 16:35:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.352 16:35:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:19.352 00:36:19.352 real 0m41.175s 00:36:19.352 user 0m54.343s 00:36:19.352 sys 0m13.651s 00:36:19.352 16:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:19.352 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:36:19.352 ************************************ 00:36:19.352 END TEST nvmf_abort_qd_sizes 00:36:19.352 ************************************ 00:36:19.352 16:35:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:19.352 16:35:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:19.352 16:35:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:19.352 16:35:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:19.352 16:35:18 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:19.352 16:35:18 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:19.352 16:35:18 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:19.352 16:35:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:19.352 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:36:19.352 16:35:18 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:19.352 16:35:18 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:19.352 16:35:18 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:19.352 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:36:25.919 INFO: APP EXITING 00:36:25.919 INFO: killing all VMs 00:36:25.919 INFO: killing vhost app 00:36:25.919 INFO: EXIT DONE 00:36:28.558 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:36:28.558 0000:74:02.0 
(8086 0cfe): Already using the idxd driver 00:36:28.558 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:36:28.558 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:36:28.558 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:36:31.858 Cleaning 00:36:31.859 Removing: /var/run/dpdk/spdk0/config 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:31.859 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:31.859 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:31.859 Removing: /var/run/dpdk/spdk1/config 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:31.859 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:31.859 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:31.859 Removing: /var/run/dpdk/spdk2/config 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:31.859 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:31.859 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:31.859 Removing: /var/run/dpdk/spdk3/config 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 
00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:31.859 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:31.859 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:31.859 Removing: /var/run/dpdk/spdk4/config 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:31.859 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:31.859 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:31.859 Removing: /dev/shm/nvmf_trace.0 00:36:31.859 Removing: /dev/shm/spdk_tgt_trace.pid2866088 00:36:31.859 Removing: /var/run/dpdk/spdk0 00:36:31.859 Removing: /var/run/dpdk/spdk1 00:36:31.859 Removing: /var/run/dpdk/spdk2 00:36:31.859 Removing: /var/run/dpdk/spdk3 00:36:31.859 Removing: /var/run/dpdk/spdk4 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2863513 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2866088 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2867101 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2868434 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2869445 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2869800 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2870418 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2870807 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2871172 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2871490 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2871818 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2872155 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2872808 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2876165 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2876657 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2876991 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2877039 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2877941 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2878236 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2879014 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2879182 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2879513 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2879808 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2880139 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2880157 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2881147 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2881468 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2881818 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2883977 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2885785 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2887632 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2889438 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2891412 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2893346 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2895176 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2897250 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2899076 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2901163 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2903247 
00:36:31.859 Removing: /var/run/dpdk/spdk_pid2905420 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2907451 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2909257 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2911352 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2913158 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2915138 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2917064 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2919057 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2920968 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2922862 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2924867 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2926685 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2928769 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2930587 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2932457 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2934487 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2936286 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2938367 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2940751 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2942849 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2944649 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2946677 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2948544 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2950514 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2952435 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2954259 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2956098 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2958166 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2959961 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2961843 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2963856 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2965671 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2967532 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2969836 00:36:31.859 Removing: /var/run/dpdk/spdk_pid2974249 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3067774 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3073408 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3083801 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3090076 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3094658 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3095534 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3100374 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3100697 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3105596 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3112177 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3115215 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3127079 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3137865 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3139943 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3141216 00:36:31.859 Removing: /var/run/dpdk/spdk_pid3161064 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3165583 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3170655 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3172535 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3174918 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3175104 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3175266 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3175565 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3176489 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3178594 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3179863 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3180497 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3187690 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3193977 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3199919 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3239081 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3243903 
00:36:31.860 Removing: /var/run/dpdk/spdk_pid3252662 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3252670 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3257615 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3257913 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3258125 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3258710 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3258722 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3259803 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3261739 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3263804 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3265851 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3267691 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3269766 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3276495 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3277226 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3278725 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3279744 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3285395 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3288599 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3294906 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3301519 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3308348 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3310442 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3312251 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3314338 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3316490 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3317329 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3318011 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3318815 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3320253 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3328246 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3328356 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3334109 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3336648 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3339192 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3340713 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3343119 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3344738 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3354953 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3355552 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3356229 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3359282 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3359848 00:36:31.860 Removing: /var/run/dpdk/spdk_pid3360431 00:36:31.860 Clean 00:36:32.121 killing process with pid 2812306 00:36:42.120 killing process with pid 2812303 00:36:42.120 killing process with pid 2812305 00:36:42.120 killing process with pid 2812304 00:36:42.120 16:35:39 -- common/autotest_common.sh@1436 -- # return 0 00:36:42.120 16:35:39 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:36:42.120 16:35:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:42.120 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:36:42.120 16:35:39 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:36:42.120 16:35:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:42.120 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:36:42.120 16:35:39 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:36:42.120 16:35:39 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:36:42.120 16:35:39 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:36:42.120 16:35:39 -- spdk/autotest.sh@394 -- # hash lcov 00:36:42.120 16:35:39 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 
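The coverage post-processing traced in the next lines only runs when lcov is installed and the compiler is not clang (the hash lcov and CC_TYPE checks just above), and it works in three stages: capture the counters the tests just produced from the build tree, append them to the pre-test baseline (cov_base.info), then strip everything that is not SPDK's own code (DPDK, system headers, the vmd example, and the spdk_lspci/spdk_top apps) from the combined totals. A minimal sketch of that capture/merge/filter sequence, assuming the workspace layout of this run; the SPDK_DIR/OUT shorthands and the loop over filter patterns are introduced here for brevity and are not the literal autotest.sh code:

#!/usr/bin/env bash
# Sketch of the lcov capture/merge/filter flow seen in the trace below.
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
OUT=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1) capture counters produced by the tests, tagged with the host name
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2) merge the pre-test baseline with the fresh capture
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3) drop coverage for code that is not SPDK's own
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# remove the intermediate tracefiles, keeping only cov_total.info
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"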
00:36:42.120 16:35:39 -- spdk/autotest.sh@396 -- # hostname 00:36:42.120 16:35:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-03 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:36:42.120 geninfo: WARNING: invalid characters removed from testname! 00:37:04.087 16:36:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:04.087 16:36:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:04.655 16:36:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:06.559 16:36:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:07.493 16:36:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:08.869 16:36:07 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:37:10.245 16:36:08 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:10.245 16:36:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:37:10.245 16:36:08 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:10.245 16:36:08 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.245 16:36:08 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.245 16:36:08 -- paths/export.sh@2 
-- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.246 16:36:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.246 16:36:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.246 16:36:08 -- paths/export.sh@5 -- $ export PATH 00:37:10.246 16:36:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.246 16:36:08 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:37:10.246 16:36:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:10.246 16:36:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713882968.XXXXXX 00:37:10.246 16:36:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713882968.XMRY3e 00:37:10.246 16:36:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:10.246 16:36:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:37:10.246 16:36:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:37:10.246 16:36:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:10.246 16:36:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:10.246 16:36:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:10.246 16:36:08 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:10.246 16:36:08 -- common/autotest_common.sh@10 -- $ set +x 00:37:10.246 16:36:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:37:10.246 16:36:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:37:10.246 16:36:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:10.246 16:36:08 -- spdk/autopackage.sh@13 -- $ [[ 
0 -eq 1 ]] 00:37:10.246 16:36:08 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:10.246 16:36:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:10.246 16:36:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:10.246 16:36:08 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:10.246 16:36:08 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:10.246 16:36:08 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:37:10.246 16:36:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:10.246 + [[ -n 2769411 ]] 00:37:10.246 + sudo kill 2769411 00:37:10.255 [Pipeline] } 00:37:10.273 [Pipeline] // stage 00:37:10.280 [Pipeline] } 00:37:10.297 [Pipeline] // timeout 00:37:10.302 [Pipeline] } 00:37:10.318 [Pipeline] // catchError 00:37:10.323 [Pipeline] } 00:37:10.337 [Pipeline] // wrap 00:37:10.343 [Pipeline] } 00:37:10.355 [Pipeline] // catchError 00:37:10.363 [Pipeline] stage 00:37:10.365 [Pipeline] { (Epilogue) 00:37:10.376 [Pipeline] catchError 00:37:10.377 [Pipeline] { 00:37:10.389 [Pipeline] echo 00:37:10.391 Cleanup processes 00:37:10.397 [Pipeline] sh 00:37:10.684 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:10.684 3376737 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:10.699 [Pipeline] sh 00:37:10.983 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:37:10.983 ++ grep -v 'sudo pgrep' 00:37:10.983 ++ awk '{print $1}' 00:37:10.983 + sudo kill -9 00:37:10.983 + true 00:37:10.996 [Pipeline] sh 00:37:11.281 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:21.286 [Pipeline] sh 00:37:21.563 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:21.563 Artifacts sizes are good 00:37:21.576 [Pipeline] archiveArtifacts 00:37:21.582 Archiving artifacts 00:37:21.843 [Pipeline] sh 00:37:22.182 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:37:22.198 [Pipeline] cleanWs 00:37:22.208 [WS-CLEANUP] Deleting project workspace... 00:37:22.208 [WS-CLEANUP] Deferred wipeout is used... 00:37:22.214 [WS-CLEANUP] done 00:37:22.216 [Pipeline] } 00:37:22.238 [Pipeline] // catchError 00:37:22.252 [Pipeline] sh 00:37:22.534 + logger -p user.info -t JENKINS-CI 00:37:22.544 [Pipeline] } 00:37:22.561 [Pipeline] // stage 00:37:22.567 [Pipeline] } 00:37:22.585 [Pipeline] // node 00:37:22.591 [Pipeline] End of Pipeline 00:37:22.635 Finished: SUCCESS
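The Epilogue's "Cleanup processes" stage above looks for anything still running out of the test workspace and force-kills it before artifacts are archived; the trailing "+ true" keeps the stage green when the only match is the pgrep helper itself, which is why the bare "sudo kill -9" with no PIDs does not fail the build. A compact sketch of the same pattern, assuming the workspace path from this run (xargs -r is used here instead of the pipeline's inline PID expansion so an empty match list is simply a no-op):

#!/usr/bin/env bash
# Force-kill leftover processes launched from the workspace; never fail the stage.
WS=/var/jenkins/workspace/dsa-phy-autotest/spdk

sudo pgrep -af "$WS" \
  | grep -v 'sudo pgrep' \
  | awk '{print $1}' \
  | xargs -r sudo kill -9 || true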